More on Teacher Performance

We recently posted a piece on the controversy surrounding the publication of teacher performance evaluations in Los Angeles.  There are a couple of interesting follow-ups circulating in our trusted news sources.  The first piece is from the always contrarian and sometimes cantankerous Jack Shafer of Slate.com.

Nobody but a schoolteacher or a union acolyte could criticize the Los Angeles Times’ terrific package of stories—complete with searchable database—about teacher performance in the Los Angeles Unified School District.

You probably don’t need to be a communications major to figure out where that piece is headed.

Shafer rightly applauds the LA Times FAQ page on what value-added analysis is, its strengths and weaknesses, and other, well, FAQs.

A second piece plucked from the New York Times compares the pros and cons of value-added analysis in a more straightforward fashion.  On the one hand:

“If these teachers were measured in a different year, or a different model were used, the rankings might bounce around quite a bit,” said Edward Haertel, a Stanford professor who was a co-author of the report. “People are going to treat these scores as if they were reflections on the effectiveness of the teachers without any appreciation of how unstable they are.”

On the other hand:

William L. Sanders, a senior research manager for a North Carolina company, SAS, that does value-added estimates for districts in North Carolina, Tennessee and other states, said that “if you use rigorous, robust methods and surround them with safeguards, you can reliably distinguish highly effective teachers from average teachers and from ineffective teachers.”
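
To make Haertel's instability point a bit more concrete, here is a toy simulation of my own (purely illustrative; it is not the model used by the LA Times or by SAS): each teacher gets a single-year value-added estimate that mixes a true effect with classroom-level noise, and we check how much the resulting rankings shift from one year to the next. The number of teachers and the noise level are made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions): 100 teachers, each with a "true"
# effect on student test-score growth, plus noisy year-to-year measurement.
n_teachers = 100
true_effect = rng.normal(0, 1, n_teachers)   # teacher's real contribution
noise_sd = 1.5                               # classroom-level noise per year

def observed_ranking(true_effect, noise_sd, rng):
    """Rank teachers by a single year's noisy value-added estimate."""
    estimate = true_effect + rng.normal(0, noise_sd, len(true_effect))
    return np.argsort(np.argsort(-estimate))  # 0 = top-ranked teacher

rank_year1 = observed_ranking(true_effect, noise_sd, rng)
rank_year2 = observed_ranking(true_effect, noise_sd, rng)

# Correlation between the two years' rankings: well below 1.0, meaning the
# same teachers land in noticeably different places from year to year.
print(np.corrcoef(rank_year1, rank_year2)[0, 1])

With noise comparable to the signal, the year-over-year rank correlation comes out far from perfect, which is roughly Haertel's worry; Sanders' counterpoint is that multi-year data and more careful modeling can shrink that noise considerably.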

Certainly, the two sides each have a point.  Even as a man of numbers (or perhaps especially as one), I worry that people put too much faith in a quantitative rating.  That said, it seems the cat has squirmed its way out of the bag on this one, and it is going to be difficult for opponents to get it back in.