One of the things I like most about Matt Yglesias’s blog is that he likes to use data. He has an interesting post up in which he makes the point that New York, Boston and DC schools all show much lower performance on standardized tests than the national average, but that if you only consider students eligible for subsidized lunches, then New York and Boston score almost the same as the national average for such students, while DC does much worse. He concludes that “once we control for demographics”, New York and Boston have school systems that are “doing fine”, while DC’s is failing.
Of course, an alternative conclusion is that if one simple adjustment for differing populations leads to such a massive re-ordering of estimated school system performance, maybe there are other population differences that also matter in assessing performance. Maybe, in other words, the low-income populations of Boston, New York and DC differ from one another, and the differing test scores are caused not by differences in school practices but by these population differences. We would then have to start adding other “controls” into the analysis. That road eventually leads to the kind of sophisticated hierarchical modeling that probably reached its apotheosis almost 20 years ago with Chubb and Moe’s seminal analysis of school performance. This is, to put it mildly, quite a bit more involved than a single-factor adjustment using three data points. Even that analysis has been subjected to sustained methodological criticism. The phenomenon is too complex for such techniques to determine causality. This is why education research increasingly uses randomized controlled experiments, analogous to clinical drug trials, to determine the effectiveness of educational methods.
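To see why a single demographic adjustment can reorder rankings so dramatically, consider a minimal sketch with invented numbers (these are illustrative only, not the actual test data from Yglesias’s post): two cities whose low-income students score identically can still show very different raw averages, purely because of their demographic mix.

```python
def overall_average(groups):
    """Weighted average of subgroup scores, weighted by each subgroup's share."""
    return sum(share * score for share, score in groups.values())

# (share of students, mean score) -- all figures hypothetical
city_a = {"low_income": (0.7, 240), "other": (0.3, 280)}
city_b = {"low_income": (0.3, 240), "other": (0.7, 280)}

a = overall_average(city_a)  # 0.7*240 + 0.3*280, roughly 252
b = overall_average(city_b)  # 0.3*240 + 0.7*280, roughly 268

# Identical subgroup performance, yet a 16-point gap in the raw averages.
print(a, b)
```

The same arithmetic cuts both ways: just as composition can mask identical school performance, unmeasured differences *within* the low-income category (income depth, language background, family circumstances) could mask further differences once the first adjustment is made.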
Yglesias’s post seems to me to be on solid ground when cautioning against drawing facile conclusions from raw averages (conclusions that usually confirm pre-existing beliefs), but it is less convincing in establishing the relative performance of the New York, Boston and DC public school systems.