The Ratings Game
Teaching isn’t baseball—and it’s time we stopped obsessing about teachers' stats.
During the past couple of years, the fascination with measuring and rating teachers' impact on student achievement has gotten completely out of hand. It's as if education has been taken over by a group of fantasy baseball fanatics who have memorized Moneyball but failed to grasp that statistical analysis of players didn't really change the actual game. Statistical analysis of teaching has morphed from a promising area of research into an unhealthy obsession.
How did this happen? First, the data-obsessed school reform community plucked the idea of rating teachers from obscurity and began pushing it as something new. Then the reform-friendly Obama administration made it a disproportionately prominent part of its $4 billion Race to the Top grant competition, beating states and districts and teachers unions over the head with the notion that removing the "firewall" between student data and teacher performance would somehow save education.
Now, I'm no anti-testing, anti-accountability ideologue. Much to the disappointment of many classroom teachers and school administrators, I've made my peace with standardized testing. I'm one of the few people on the planet (besides some civil rights and disabilities groups) who still defend NCLB's much-loathed AYP school rating system. And I'm just as critical as anyone of the current system most districts use for hiring, paying, and evaluating teachers. That's a sorry mess that needs to be sorted out. But treating value-added ratings of teachers as the be-all and end-all for making schools better seems wasteful and unwise.
Measuring a teacher's impact is just a small part of the teacher quality issue, and the quality issue is just a small part of the overall education puzzle.
Pushing university-based education schools to do a better job of preparing teachers deserves much more energy from lawmakers. Fifteen years ago, education schools fought off proposals to make them accountable for the quality of the teachers they provide to school districts. Since then, report after report has suggested that teacher preparation remains inadequate. And yet, only D.C. and four Race to the Top states (Maryland, Massachusetts, New York, and Rhode Island) have indicated that they will make any real use of the ed-school performance data that they are being paid to post online. States that didn't win Race to the Top funding have little incentive, rhetorical or financial, to take action.
Teacher quality isn't the only issue that warrants serious attention, either. Improving the quality and accessibility of early childhood education is one example. Five years ago there was widespread momentum toward making preschool universal, but that momentum has largely faded. Only in the most recent (and much smaller) round of Race to the Top has there been any meaningful attention to early childhood issues.
Encouraging schools to provide preventive wraparound social services is another issue that should be on the front burner. For all the attention given to Geoffrey Canada's Harlem Children's Zone, wraparound services have remained a second- or even third-tier priority.
In the meantime, the 12 states that won grants (and many others that didn't) are spending inordinate amounts of time, money, and political capital building teacher-rating systems. They may or may not succeed. Either way, most schools likely won't be affected dramatically.
Inevitably, the data hounds and technocrats will continue with their statistical schemes. Already, law enforcement agencies, which led the race into data-based decision-making, boast that they can predict where crimes will occur and even who is likely to be a victim or perpetrator. Soon, statisticians will go from claiming they can measure teachers' impact on learning to claiming they can predict it. Or at least that's my prediction.