
AFT ‘Analysis’ is Defective and Distracting

January 10, 2013 posted by Eric Lerum

An honest assessment of student achievement in any state would conclude that we have a long way to go in ensuring every student receives the kind of education he or she deserves. Student achievement on the National Assessment of Educational Progress (NAEP) remains woefully low. In Massachusetts – the highest-ranked state – 49% of 8th graders are not proficient in math and 54% are not proficient in reading. And that's the top-ranked state. In every other state, well over half of students are not where they need to be in math or reading. Not only that, but achievement gaps remain in the double digits in nearly every state. This is unacceptable.

There are many ways to attack this problem – policies, programs, initiatives, etc. All are important, and ultimately the real work has to happen in the classroom every day. At StudentsFirst, we are focused on state policies and their role in enabling school and student success. The question we try to answer from a policy perspective is relatively simple: Can existing state policies be improved to provide better educational outcomes for students? Given the low overall achievement and large achievement gaps evident in every state, I can't imagine any reason to defend the current policies in any state by talking about student achievement as if it's good enough.

And yet a few folks have decided to do just that, starting by pointing out that state grades on our 2013 State Policy Report Card do not incorporate or reflect student achievement. That is accurate. The StudentsFirst State Policy Report Card looks specifically at the policy environment in each state, evaluating how effectively states are setting up their school systems for success. While we did not consider student achievement when grading each state, a misleading blog post, endorsed and quickly promoted by the American Federation of Teachers (AFT), among others, recently examined the correlation between the StudentsFirst-assigned GPAs by state and state student achievement on the NAEP. The blog claims that these two indicators are inversely correlated, implying that the StudentsFirst policy positions are somehow bad for student achievement. And while the blog comes complete with graphical illustrations and calls its results "quite clear and unambiguous," those conclusions are in fact neither clear nor unambiguous. Let's take a closer look.

First, this critique fails to look at the rate of student growth for the states ranked highest on the State Policy Report Card. In fact, these states show higher levels of growth than the national growth rate on NAEP, despite facing higher levels of poverty than the national average. Moreover, the states that received the lowest grades on the Policy Report Card have typically exhibited lower growth in student achievement over the past eight years than states receiving higher grades. In the bottom 20 states ranked in the State Policy Report Card, student achievement on the NAEP grew at half the national average rate, while the top 20 states demonstrated growth greater than the national average. In other words, states that are sitting back on their status quo policies are sitting still in terms of student achievement. States that have begun to change their policies also appear to catalyze growth in student learning. It's too early to attribute this growth to the new policies in most states, but a state like Florida – which has experienced steady growth and narrowing of its achievement gaps over the course of 10+ years of reform – provides reason to be optimistic. In sum, the simple snapshots of absolute achievement presented in critiques of our Report Card state rankings show no regard for baselines or change over time.
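
To make the growth comparison concrete, here is a minimal sketch in Python of the kind of calculation involved. The scores below are hypothetical placeholders, not actual NAEP results; a real analysis would use each state's published NAEP scale scores from two administrations (for example, 2003 and 2011).

```python
# Minimal sketch: compare a state's NAEP growth to national growth.
# All scores are hypothetical placeholders, not real NAEP data.

# Average 8th-grade math scale scores at two points in time.
naep_2003 = {"State A": 272, "State B": 279, "National": 276}
naep_2011 = {"State A": 281, "State B": 282, "National": 283}

national_growth = naep_2011["National"] - naep_2003["National"]

for state in ("State A", "State B"):
    growth = naep_2011[state] - naep_2003[state]
    verdict = "above" if growth > national_growth else "at or below"
    print(f"{state}: +{growth} points vs. +{national_growth} nationally "
          f"({verdict} the national rate)")
```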

Second, the correlations presented by the blog are not statistically significant. While that may not mean a lot to some, it's actually pretty important if you're trying to make an assertion about a relationship. In this case, the assertion made – that there is a negative relationship between the StudentsFirst-assigned GPAs for state policy and statewide student achievement – is just flat out untrue. The blogger could have reached the correct conclusion – that there is no statistically significant correlation between the policy grade and the state's NAEP score – by doing a more rigorous regression analysis. Adding a coefficient of determination, or R-squared value, to the regression helps explain how good one variable is at predicting another; in this case, how well the policy score predicts NAEP scores. An R-squared can range from 0 to 1, with 1 meaning the model has perfectly captured the relationship being examined, and 0 indicating that the independent variable in the model has absolutely no relation to the dependent variable. In this case, the blogger looked at the relationships between the StudentsFirst-assigned GPAs by state and student achievement on NAEP 4th and 8th grade math and reading. To support a claim of strong correlation, the analysis should yield an R-squared close to 1.
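
For readers who want to verify this kind of calculation themselves, here is a minimal sketch in Python using scipy's linregress. The GPA and NAEP arrays are hypothetical placeholders, not the actual 50-state data; for a one-variable regression, R-squared is simply the square of the Pearson correlation coefficient the fit reports.

```python
# Minimal sketch: regress NAEP scores on policy GPAs and report
# R-squared. The arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

policy_gpa = np.array([2.1, 1.3, 0.8, 2.7, 1.9, 1.1, 0.5, 2.4])
naep_score = np.array([284, 288, 291, 282, 286, 290, 287, 285])

# Ordinary least-squares fit of NAEP score on policy GPA.
fit = stats.linregress(policy_gpa, naep_score)

# For simple linear regression, R-squared = (Pearson r) squared.
r_squared = fit.rvalue ** 2
print(f"slope = {fit.slope:.2f}, R^2 = {r_squared:.3f}, "
      f"p-value = {fit.pvalue:.3f}")
```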

Instead, examining both math and reading for both 4th and 8th grades, we computed the following R-squared values:

  • 4th Grade Math: 0.04
  • 4th Grade Reading: 0.016
  • 8th Grade Math: 0.07
  • 8th Grade Reading: 0.06

Each comparison – every single one – yields an R-squared close to zero. In other words, the current policy grades have almost no correlation with, and no power to predict, existing NAEP scores. Below, we've reproduced the same regressions the blogger ran, with the important R-squared information added next to each one.

[Charts: the four regressions of StudentsFirst policy GPAs against state NAEP scores – 4th Grade Math, 4th Grade Reading, 8th Grade Math, and 8th Grade Reading – each annotated with its R-squared value.]
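
A chart like those above can be reproduced with a short matplotlib sketch along these lines; again, the arrays are placeholders standing in for the actual state GPA and NAEP values, and the title is illustrative.

```python
# Minimal sketch: scatter plot with fitted line and R-squared in the
# title. Arrays are hypothetical placeholders for state-level data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

policy_gpa = np.array([2.1, 1.3, 0.8, 2.7, 1.9, 1.1, 0.5, 2.4])
naep_score = np.array([284, 288, 291, 282, 286, 290, 287, 285])

fit = stats.linregress(policy_gpa, naep_score)
line_x = np.linspace(policy_gpa.min(), policy_gpa.max(), 100)

plt.scatter(policy_gpa, naep_score)
plt.plot(line_x, fit.intercept + fit.slope * line_x)
plt.xlabel("StudentsFirst policy GPA")
plt.ylabel("Average NAEP scale score")
plt.title(f"8th Grade Math (R^2 = {fit.rvalue ** 2:.2f})")
plt.show()
```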

Third, and perhaps most important when considering the data this way: taking a single year of NAEP data and attempting to correlate it with policy changes is misleading. That approach gives no consideration to a state's initial baseline or to other factors related to student achievement. Moreover, in most cases the state policies that informed the grades – particularly the highest grades – were passed after the 2011 NAEP exam, so the policies were not even in place yet. Just as it's too early to tell in most states whether the policy changes are having a positive impact on student achievement (it takes time to implement, after all), it's also inaccurate (and illogical) to claim a correlation looking backward.

In short, while it may be a quick, convenient attack to put out information saying that states with low grades are actually doing well on NAEP, or vice versa, this kind of analysis does nothing to further the important policy conversations that are the main objective of the StudentsFirst annual State Policy Report Card. Rather, it redirects our attention away from issues that matter. Instead, let's focus on whether the policies in place in every state are enabling teachers and school leaders to succeed or empowering parents to make the best decisions for their children. And if they're not, what are we prepared to do about it?

*January 11, 2013 update: clarifications added on the definition of R-squared values to explain the statistical relationship between policy GPAs and NAEP scores.

Eric Lerum is the Vice President of National Policy. Follow him on Twitter @ericlerum.

Topics: Education Research, Measuring Student Achievement