I wrote a post questioning the reliability of a study that claimed to show that Wisconsin public workers face a compensation penalty versus what they would make in the private sector. The motivation behind the study that I criticized is that comparing average compensation for public versus private workers is too imprecise to draw conclusions about whether public sector workers are overpaid, underpaid or paid about right. I agree with this. The study's method was to control for this selection bias by estimating how much more or less people tend to make depending on whether they have a college degree, are black or white, and so forth, and then combining these adjustments into a regression model that should hold these factors equal when comparing compensation for public versus private sector workers. It is this method that I criticized.
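To make the object of the critique concrete, here is a toy version of a static human capital regression. The variables, coefficients and data are invented for illustration; they are not taken from the study in question:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical worker data -- illustrative covariates only.
college = rng.integers(0, 2, n)        # 1 = has a college degree
experience = rng.uniform(0, 30, n)     # years of work experience
public = rng.integers(0, 2, n)         # 1 = public sector worker

# Simulated log wages with a known "true" sector effect of -0.05.
log_wage = (2.5 + 0.4 * college + 0.02 * experience
            - 0.05 * public + rng.normal(0, 0.3, n))

# Static human capital model: regress log wage on the controls plus
# a sector dummy; the dummy's coefficient is the estimated wage gap.
X = np.column_stack([np.ones(n), college, experience, public])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
gap = beta[3]  # estimated public-sector log-wage differential
```

The model recovers the gap here only because the simulation includes every wage-relevant factor in the regression; the critique is precisely that real data offer no such guarantee.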
One technical problem with this approach is that it’s very hard to adjust correctly for these factors. Another, more obvious one is that if the model omits important factors, it will be unreliable. This second critique was the one I made in my post. This kicked off a fair amount of back-and-forth and other commentary.
Two conservative scholars (Andrew Biggs of AEI and Jason Richwine of Heritage) have now claimed that while the study in question was flawed, a better modeling method can be employed to show the opposite conclusion – that public sector workers are overpaid versus what they would earn in the private sector.
Manzi is referring to “the human capital model,” which holds that workers are paid according to their skills and personal characteristics, like education and experience. Most scholars—including Andrew, myself, and Heritage’s James Sherk—use it to compare the wages of the public and private sectors. If the public sector still earns more than the private after controlling for a variety of factors, then it is said to be “overpaid” in wages. But because we cannot control for everything, Manzi is saying, the technique is not very useful.
His critique is reasonable enough, but overwrought. The human capital model has been around for three decades, and it is unlikely that economists have failed to uncover important variables that would drastically change its results.
So, the reason I’m wrong is that labor economists have been building human capital models this way for thirty years, and it’s unlikely that any important factors would not be in the models at this point. Got it.
Here are the sentences that immediately follow in Richwine’s post:
Nevertheless, there are other techniques that address most of Manzi’s concerns. An upcoming Heritage Foundation report uses a “fixed effects” approach, which follows the same people over time as they switch between the private and federal sectors. By looking at how the same person’s wage changes when he moves between sectors, a lot of unobservable traits—intelligence, extroversion, etc.—are accounted for.
So, in fact, there are obviously important factors – like, say, how smart you are – that aren’t accounted for by the human capital model that I critiqued. They’re also apparently missing from other standard human capital models, since a different “fixed effects” model is required to address them.
Biggs, in his post, describes more specifically what their fixed effects analysis does:
To address this, we did a second form of analysis that isn’t subject to Manzi’s objections. Rather than comparing different people at one point in time, this second approach – called a “fixed effects model” – follows the same people over time, specifically as private sector workers found new jobs which could be in the public or private sector. If workers get a bigger raise when they switch from private to federal employment than workers who switch from one private job to another, we can infer that the federal government overpays.
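A minimal sketch of the wage-change comparison Biggs describes might look like the following. The data and effect sizes are mine, made up for illustration, and are not drawn from the Heritage report:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical job switchers: each worker has a fixed, unobserved
# ability term that affects wages in BOTH jobs, so it cancels out
# when we look at the wage CHANGE rather than the wage level.
ability = rng.normal(0, 0.5, n)
to_federal = rng.integers(0, 2, n)   # 1 = switched private -> federal

# Simulated log wages before and after the job switch, with a "true"
# federal premium of 0.10 built into the post-switch wage.
wage_before = 2.5 + ability + rng.normal(0, 0.2, n)
wage_after = (2.6 + ability + 0.10 * to_federal
              + rng.normal(0, 0.2, n))

# Fixed effects via first differences: ability drops out of the change.
change = wage_after - wage_before
premium = (change[to_federal == 1].mean()
           - change[to_federal == 0].mean())
```

Differencing removes any trait that is fixed for a given worker, which is the method's appeal; it does nothing about traits that change with the job or over time, which is the objection raised below.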
This sounds (and is) reasonable, but do you see the problems? First, the people who leave their jobs, voluntarily or involuntarily, are going to tend to be quite different from the people who stay in their jobs. So generalizing from this group of leavers to the broader population of workers will be hazardous, to put it mildly. Second, even within this group of leavers, while there are some characteristics about people that are fixed, lots of things aren’t. Maybe the nature of the average job in the private sector tends to get them to work harder (or less hard) after the switch than the nature of the average job in the other sector. Maybe people tend to switch to government employment at a point in life when they wanted to start to take it easy (or work harder) anyway. Maybe switching to private employment disproportionately involves a move to a higher-cost urban (or lower-cost rural) area. And so on, just as in the human capital model.
But this is just quibbling, right? Not really. Biggs and Richwine reference the classic papers on this method by Alan Krueger. If you go back to this work, how does Krueger deal with these potential selection bias problems? He builds a regression model with “occupation, human capital and demographic controls.”
We’re right back where we started, in the sense that we have the same kinds of questions about, for example, unobserved sources of selection bias between the two groups not accounted for in the model.
As it turns out, I’ve built many fixed effects, regression and similar models for the purpose of estimating the effects of various changes in compensation, pricing and other similar behavior drivers. And in contexts in which accuracy can be validated, fixed effects models are almost always better than the kinds of static models that I originally criticized. The reason is, as Biggs says, that some of the unobserved differences between groups are accounted for by them. So I agree that, for the purpose of estimating a public-private compensation gap, it is reasonable to believe that fixed effects models are better than static human capital models.
And a motorcycle is better than a bicycle for jumping the Grand Canyon, but I don’t advise you to attempt it with either one. The important accuracy issue here is not relative accuracy, but absolute accuracy. And the methods of assessing statistical significance and the like that are embedded within any of these approaches do not address the critique I am raising, because the critique is that the assumptions of the analysis itself are plausibly very wrong.
Have I then set up a nihilistic position that we can never know anything tolerably well because I can just keep raising these points that might matter, but are not included in the model? In effect, have I put any analyst in the impossible position of proving a negative? Not really. Here’s how you measure the accuracy of a model like this without accepting its internal assumptions: use it to make predictions for future real world experiments, and then see if its predictions are right or not. The formal name for this is falsification testing. This is what’s lacking in all of the referenced arguments in support of these models.
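The kind of falsification test I mean can be sketched as an out-of-sample check: fit the model on data from one period, predict outcomes for a later period the model has never seen, and compare. A toy illustration with invented data follows; if the model's assumptions were wrong, the out-of-sample error would blow up well past the in-sample noise level:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

def fit(X, y):
    # Ordinary least squares via the normal equations solver.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical period-1 data used to fit the model.
X1 = np.column_stack([np.ones(n), rng.uniform(0, 30, n)])
y1 = X1 @ np.array([2.5, 0.02]) + rng.normal(0, 0.3, n)

# Hypothetical period-2 data, never seen during fitting.
X2 = np.column_stack([np.ones(n), rng.uniform(0, 30, n)])
y2 = X2 @ np.array([2.5, 0.02]) + rng.normal(0, 0.3, n)

beta = fit(X1, y1)
pred = X2 @ beta

# If the model's assumptions hold, out-of-sample error should be close
# to the in-sample noise level; a large gap falsifies the model.
rmse = np.sqrt(np.mean((y2 - pred) ** 2))
```

In this simulation the check passes by construction, because period 2 is generated by the same process as period 1; the point of the test in the real world is that nothing guarantees this.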
Human capital models, fixed effects models, and other various pattern-finding analyses are useful to help build theories, but a metaphysical debate about the “worth” of various public versus private sector jobs based upon them is fundamentally unproductive. For one thing, it won’t ever end. And as Megan McArdle correctly put it, the practical question in front of us is whether we the taxpayers can procure the public work that we want at a lower cost (or more generally, though less euphoniously, whether we are at the practical optimum on the cost-quality trade-off). If you want an analytical answer to this question, here is what I would do: randomly select some jurisdictions, job classifications or other subsets of public workers, cut their compensation, and then see if we can observe a material reduction in net value of output in these areas versus the control areas. If not, cut deeper. And keep cutting deeper, until we find our indifference point.
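A stylized version of that experimental design, with entirely hypothetical jurisdictions and outcomes, is just a randomized treatment-control comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200  # hypothetical jurisdictions

# Randomly assign half the jurisdictions to a compensation cut.
treated = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated change in net value of output per jurisdiction; here the
# cut has no true effect, standing in for the case where no material
# reduction in output is observed.
outcome = rng.normal(0, 1.0, n)

# Difference in mean outcomes between cut and control jurisdictions.
effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
```

The decision rule in the text maps onto this directly: if the measured effect is not materially negative relative to the control group's variability, repeat the experiment with a deeper cut.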
There would be obvious limitations to this approach. First, generalizing the results of initial experiments is not straightforward. Second, evaluating output is not straightforward for many areas of government. But at a minimum, and unlike the world of endlessly dueling regressions, this would at least let us see the real-world effects of various public compensation levels first-hand, and allow the public to make an informed decision about whether they prefer the net effect of a change to public sector compensation or not.
(Cross-posted to The Corner)