Monday, December 9, 2019

AV #203 - The Capitol versus the Classroom - 3 examples


The good intentions behind legislation → how it is experienced in our schools

“how the unique context of each high school … could doom implementation.”[i]


**

The gap between policymakers and practitioners on education issues is well known. A few examples follow; no doubt you can think of others. It might be said that the divide is the world of Theory versus actual Experience. Or maybe it is Reason versus Emotion: much thought and debate by legislators and advocates produces a bill—but in the classroom, it feels wrong, or insulting, or too prescriptive.

Can we do anything about this? Or is the distance between legislators crafting bills and educators in schools working with students inevitable? Will the good intentions behind new legislation continue to breed frustration, even resentment, from those asked (or, at times, required) to carry out the new policies?

Three major pieces of education policy—from the past 10 years—come to mind.


   In The Make-or-Break Year, by Emily Krone Phillips, we read of a largely successful initiative to support ninth graders in a number of Chicago high schools. In Phillips’ opinion, the effort proved effective because she saw “teachers, principals, and policymakers working together … to solve a common problem, in contrast to the typical reform script in which solutions are imposed from above, without much thought given to the complexities on the ground” (p. 17).

2009 - SB 163 – EDUCATION ACCOUNTABILITY ACT – Look at the original language. The positive intent is clear. Among the goals in the legislative declaration (22-11-102): “Reports information … that is perceived by educators, parents, and students as fair, balanced, cumulative, credible and useful”; “provides support for improvement at each level … ensuring that educators have the data necessary to assist the neediest students in making more than a year’s academic growth in a year’s time so that these students can catch up ….”

Not one reference to “punishing” schools, except this: “to move from a punitive accountability system to one that is focused on learning and achieving high levels of academic performance.” (Emphasis mine.)

But a decade later, some school leaders complain the Accountability Act, for all its admirable goals, is indeed punitive. They describe how “hurtful” the School Ratings can be—to be labeled, or judged, as on Priority Improvement or Turnaround. This past year several even claimed it will sting terribly to earn “just” the second-highest rating, Improvement.[ii] Hard to sympathize there. Many schools on Improvement show over 50% of their students not meeting expectations in literacy and math. And while districts and schools have good reason to point out the flaws in SB 163, they appear defensive. The law has provided us with useful data that reveal how far short we are of meeting our own goals, but some educators seem focused on shooting the messenger.


2010 - SB 191 - ENSURING QUALITY INSTRUCTION THROUGH EDUCATOR EFFECTIVENESS – It is hard to argue with the underlying assumption in the law: “to evaluate the effectiveness of licensed personnel is crucial to improving the quality of education in the state.” Again, the language was upbeat: “Each teacher is provided with an opportunity to improve his or her effectiveness through a teacher development plan that links his or her evaluation and performance standards to professional development opportunities.” Sounded good!

But as implemented, SB 191 felt to many teachers like They Were Out to Get Me. The young teacher could feel: After all my school went through to learn about me and evaluate my abilities before I was hired, why so little trust? I’d much prefer a good mentor to all these observations by my principal. And the demand that I gather all this “evidence” to prove I am doing my job? I never have that conversation about how I can improve—it’s just compliance with a state mandate. Even my principal seems less than thrilled at this new burden. Or why else, when she brought me into her office, did she shrug and say, “OK, let’s get this done!”—and then turn her back to me, open her computer, and type out her responses on the nine-page rubric?


2012 – HB 1238 – The READ Act (Colorado Reading to Ensure Academic Development Act) made great sense to our lawmakers. The Senate passed it 35-0. Knowing that the most recent state test at the time (2011[iii]) showed 35% of our 4th grade students not reading at grade level, many cheered the law’s focus: “early literacy development for all students and especially for students at risk to not read at grade level by the end of the third grade.”[iv] Of course! And for this purpose, the state would gladly commit over $30 million a year.[v]

   “Like so many large-scale, top-down initiatives before it, the [FILL IN THE BLANK] contained the germ of a good idea, but it was insensitive to the realities on the ground.”[vi]

In the classroom—especially in our lowest-achieving schools, where a majority of students in a first- or second-grade class might be on a READ Plan—implementation can be overwhelming. What policymaker expected teachers would need to pull 20 out of 26 students for those one-on-one assessments three times a year? And while the teacher is in the hallway testing one child, who is teaching the other 25 kids? What about the time needed to complete the “intervention plans” for all 20?

And now a reaction to last spring’s update to the READ Act (SB19-199). Districts are upset the state wants to take more control over how the $33 million is spent; teachers are miffed about required training on effective strategies to support struggling readers. The veteran teacher might well ask: Who says the state knows best? Though one must ask: If K-3 teachers are doing this so well, how come each year we still see roughly 15% of our K-3 students—close to 40,000 kids[vii]—struggling to read?

I explore the gap not to take sides but to shed light on the dilemma. It is a simple reminder that the end result of what seems well designed under the gold dome often proves more frustrating than beneficial for our schools.

No answers here. But can the gap—or is it a chasm?—between policy and practice be narrowed? Two suggestions:

1. Policymakers: We must hear from practitioners when drafting new legislation. If the voice of educators had been heard during the writing of such important bills as those above, perhaps we could have prevented some of the outcomes that add to the distrust and disillusionment from the field. Why not pilot efforts before going statewide with anything? Why not borrow from the scientific method[viii]—start small, gather preliminary findings, foster peer review, etc.? And once a law is in place, policymakers must follow up. Expect surprises. Listen to educators to learn what is and is not working. Our laws are not set in stone. It’s OK to rewrite!

2. Educators: Many of us believe key decisions should be made at the school level (See AV #201 - Autonomy, independence, self-governance[ix]). It is natural for us to be wary of legislation that presumes to be a good fit for all, e.g., for DPS and JeffCo and Cherry Creek as well as for Colorado’s 107 small rural districts, each with under 1,000 students. And yet we can’t ignore the evidence: a huge percentage of our students are not meeting our own academic standards. Let’s begin there. Accept that change is needed, that our overly defensive posture—what are those know-it-alls at the Capitol trying to force on us now?—comes across as both proud and blind. It is as if we are settling for the status quo. That is not an option. Who among us can argue that all is well?

Yes, we all know that, no matter how well education policy is written, there will be unintended consequences. But we can be more careful. Good intentions are not enough. Perhaps the well-known medical injunction is also a useful reminder in crafting laws that impact schools. Make sure they Do No Harm.



Endnotes




[i] The Make-or-Break Year, by Emily Krone Phillips, 2019 (p. 85).
[ii] “Deirdre Pilch, superintendent of Greeley-Evans School District 6, said the changes will be a blow to schools where staff worked hard to reach the top ranking.
“‘That will be so demoralizing to those kids and those parents and those teachers,’ she said.”
[v] From the four most recent Annual Reports on the Colorado READ Act:
-“In the spring of 2015, districts reported 36,420 students as having a significant reading deficiency. Approximately $33 million was distributed in per-pupil intervention funds….” http://www.cde.state.co.us/coloradoliteracy/readactlegislativebrief2016
-2016-17 – “STATE TOTAL -  39,014 [students identified as SRD] - $33,047,438.” http://www.cde.state.co.us/coloradoliteracy/readactreport51817npf
-“For the 2017-2018 school year, the total amount of funds available for distribution to districts was approximately $33 million. In the spring of 2017, districts reported 40,533 students as having a significant reading deficiency.” http://www.cde.state.co.us/coloradoliteracy/coloradoreadactreport
-“For the 2018-19 school year, the total amount of funds available for distribution to districts was approximately $33 million. In the spring of 2018, districts reported 39,614 students as having an SRD” [significant reading deficiency]. http://www.cde.state.co.us/coloradoliteracy/19readreportpdf. (Bold mine)
[vi] The Make-or-Break Year. The quote comes from the chapter on Chicago Mayor Rahm Emanuel’s initiative to make the school day longer throughout the school system. The exact quote reads:
“Like so many large-scale, top-down initiatives before it, the longer school day contained the germ of a good idea, but it was insensitive to the realities on the ground” (p. 285).
[vii] See Endnote v above.
[viii] “The common element in modern science, regardless of the specific field or the particular methods being used, is the critical scrutiny of claims. It’s this process—of tough, sustained scrutiny—that works to ensure that faulty claims are rejected and that accepted claims are likely to be right.
“A scientific claim is never accepted as true until it has gone through a lengthy process of examination by fellow scientists. This process begins informally, as scientists discuss their data and preliminary conclusions with their colleagues, their post-docs and their graduate students. Then the claim is shopped around at specialist conferences and workshops. This may result in the scientist collecting additional data or revising the preliminary interpretation; sometimes it leads to more radical revision, like redesigning the data collection program or scrapping the study altogether if it begins to look like a lost cause. If things are looking solid, then the scientist writes up the results. At this stage, there’s often another round of feedback, as the preliminary write-up is sent to colleagues for comment.”  “Put Your Faith in Science,” by Naomi Oreskes, Time, Nov. 18, 2019.
[ix] Another View’s website, https://anotherviewphj.blogspot.com/.
