By this time, most of you have heard how the Los Angeles Times used a statistical technique called Value Added Modeling (VAM) to separate the good LA teachers from the bad. You're probably also aware that this technique is being lauded by the US Department of Education, state education departments, and others as a basis for teacher pay and employment decisions: the teachers whose students improve the most get raises, while those whose students stagnate get fired.
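For those who haven't seen the mechanics, the core idea of value-added modeling is simple: predict each student's score from their prior-year score, and credit the teacher with the average amount by which their students beat (or miss) that prediction. Here's a minimal sketch of that idea with made-up data; the Times' actual model controls for more factors, so treat this as an illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 1000 students with prior-year and current-year test scores,
# each assigned to one of 50 hypothetical teachers.
n_students, n_teachers = 1000, 50
teacher = rng.integers(0, n_teachers, n_students)
prior = rng.normal(50, 10, n_students)
current = 5 + 0.9 * prior + rng.normal(0, 8, n_students)

# Step 1: predict each current score from the prior score
# (simple linear regression).
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's "value added" is the mean residual
# (actual minus predicted) across their students.
residual = current - predicted
value_added = np.array(
    [residual[teacher == t].mean() for t in range(n_teachers)]
)
print(value_added.round(2))
```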
The Economic Policy Institute released a report this week that shatters the notion that VAM should be used as a major factor in evaluating teachers.
The report found that:
- There are broad year-to-year inconsistencies. For example, of the teachers who are highly ranked one year, one-third stay highly ranked the next year, one-third move to the middle, and one-third fall to the lowest rankings. (A toy simulation after this list shows how measurement noise alone produces this kind of churn.)
- While teachers' ratings did correlate with their students' scores for the year they taught them, they correlated just as strongly with those students' scores from the previous year, something the teachers couldn't possibly have affected.
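It's worth seeing how little it takes to produce the churn in the first finding. In the sketch below, every teacher's underlying effectiveness is fixed across both years; the parameters are invented, but noisy year-to-year estimates alone are enough to knock many top-third teachers into the middle or bottom of the rankings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 300

# Each teacher has a fixed "true" effect, but each year's estimate adds
# sampling noise from a small class (here, noise twice the signal's size).
true_effect = rng.normal(0, 1, n_teachers)
year1 = true_effect + rng.normal(0, 2, n_teachers)
year2 = true_effect + rng.normal(0, 2, n_teachers)

def tercile(x):
    # 0 = bottom third, 1 = middle third, 2 = top third of the ranking
    return np.searchsorted(np.quantile(x, [1/3, 2/3]), x)

# Track where year-1 top-third teachers land in year 2.
top_year1 = tercile(year1) == 2
for t, label in [(2, "stayed on top"), (1, "fell to middle"), (0, "fell to bottom")]:
    share = (tercile(year2)[top_year1] == t).mean()
    print(f"{label}: {share:.0%}")
```

Even though no teacher's real effectiveness changes between years, a large share of the "top" teachers drop out of the top third, which is the instability the report describes.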
Factors other than the teacher that have a significant impact on student test scores include: the influence of other teachers, specialists, and tutors; school attendance; at-home activities and environment; curriculum materials; family resources; and peers and friends. Teachers whose students are learning English show statistically significantly worse results than teachers whose students are primarily native English speakers.
Research shows that reliance on high-stakes tests narrows and oversimplifies classroom focus to only what is being tested.
The report concludes that the results of VAM methods are too unstable, imprecise, and unreliable to be used for operational decisions.
Systematic observation, performed by well-qualified peers or supervisors, is a much more reliable indicator of teacher effectiveness. Observations don't have to be made in person; they can also be done by video or through review of lesson plans and student work.