
The Case for Meaningful Educator Evaluations, Now Even More Meaningful!

January 24, 2013 posted by Nithya Joseph and Eric Lerum

The debate over teacher evaluations all too often gets caught up in rhetoric (or, in the case of New York City, flat-out stuck). Indeed, ever since the Widget Effect exposed the farce behind most current teacher evaluation systems, the dialogue about how to assess classroom performance in a meaningful way has frequently broken down along divisive lines: whether student growth data should be used, and whether classroom observations can really be carried out with efficiency and fairness. Many teachers are concerned about being evaluated on a single test score, even though we're not aware of a single district or state that has proposed such a system. The same goes for evaluating teachers solely on test scores – yet there are plenty of fights around that (non-)issue nonetheless. These polarizing conversations make it difficult to focus on the substantive issues that actually need to be worked through.

That's where the value of the Measures of Effective Teaching (MET) project, funded by the Bill & Melinda Gates Foundation, really comes into play. Despite limitations worth noting, the MET report released earlier this month streamlines the conversation with real, based-in-a-classroom evidence in a way that hasn't been done before.

Specifically, MET makes two issues clearer. First, the report wades through the muck around how much weight should be given to student growth and tells us clearly that student growth is a measure that matters -- and it matters a lot. After studying 3,000 teachers in seven districts, the report finds that when student growth is diluted to less than 33 percent of the total, the evaluation loses its power to predict future performance, and its reliability and value are compromised. This is no small matter. States all over the country are in the beginning stages of implementing their evaluation systems, and nineteen states still do not require any evidence of student learning in their evaluations. The range of student growth weights recommended by the MET study -- 33 to 50 percent -- reinforces the pioneering work happening in places like Colorado, the District of Columbia, and Tennessee. Moreover, it should make it all the more difficult for states at the opposite end of the spectrum to continue to ignore the value of including student growth in evaluations.
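For readers who want to see what the weighting debate looks like in practice, here is a minimal sketch of a multiple-measures composite. The measure names, scores, and weighting schemes below are hypothetical and are not drawn from the MET report or any actual state or district system; the point is simply that the weight assigned to student growth materially changes the composite rating a teacher receives.

```python
# Hypothetical illustration only: the measure names, scores, and weights below
# are invented for this sketch and are not taken from the MET report or any
# actual state or district evaluation system.

measures = {  # scores on a 1-4 scale
    "student_growth": 3.6,
    "observations": 2.8,
    "student_surveys": 3.1,
}

weighting_schemes = {
    "50% growth": {"student_growth": 0.50, "observations": 0.35, "student_surveys": 0.15},
    "33% growth": {"student_growth": 0.33, "observations": 0.47, "student_surveys": 0.20},
    "20% growth": {"student_growth": 0.20, "observations": 0.55, "student_surveys": 0.25},
}

for name, weights in weighting_schemes.items():
    # Composite is a simple weighted average of the measure scores.
    composite = sum(weights[m] * score for m, score in measures.items())
    print(f"{name}: composite rating = {composite:.2f}")
```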

Second, meaningful evaluation policies built on multiple measures include classroom observations, and here the MET study offers unique insight into the role of the observer, as well as the duration and number of observations. This is important for state and local education agencies (SEAs and LEAs) getting into the thick of implementation. While Jay P. Greene raises a number of valid questions about just how reliable observations are as an evaluation measure, particularly given how costly those efforts can be, we believe observations are powerful and necessary because they create space for ongoing feedback and dialogue between a teacher and an observer. We are not aware of a better way to understand what's happening in the classroom – and therefore how to interpret the student learning data coming out of it – than observation. Moreover, requiring principals to assume the role of instructional leader and to take the time to observe the teaching going on in their buildings can be hugely empowering for both principals and teachers.

One final note -- Andy Smarick weighed in with some interesting thoughts on what happens next with the MET recommendations and the need for the Gates crew to see their research findings through to practice. He rightly asserts that it will take coordination among policymakers at all levels, as well as higher education players, consultants providing technical assistance, curriculum supporters, and others, to achieve change at scale across districts.

We think this should serve as a word to the wise for the education advocacy community as well, including national organizations like StudentsFirst (the education reform advocacy organizations -- ERAOs!) and the dozens of state- and locally-based organizations. Policy change is a hard slog; getting states to enact the strongest evaluation policies based on the available research is no easy feat. The MET findings and recommendations provide a starting point around which we can all align. By starting with such a strong framework, we can avoid getting stuck on the issues that feed the worst compromises and result in watered-down policy proposals that are no longer transformative or meaningful. Even better – with the MET study complete, there's no longer a need for a taskforce or committee to study the issue for another year. No more need to "kick the can."

Sure, the MET study doesn’t provide all of the answers, but it does a good job of giving us thoughtful, researched, actionable policy solutions that enable us to move forward. We can work with that.

Topics: Teacher Evaluation