In early 2009, the United States was engaged in an intense public debate over a proposed $800 billion stimulus bill designed to boost economic activity through government borrowing and spending. James Buchanan, Edward Prescott, Vernon Smith, and Gary Becker, all Nobel laureates in economics, argued that while the stimulus might be an important emergency measure, it would fail to improve economic performance. Nobel laureates Paul Krugman and Joseph Stiglitz, on the other hand, argued that the stimulus would improve the economy and indeed that it should be bigger. Fierce debates can be found in frontier areas of all the sciences, of course, but this was as if, on the night before the Apollo moon launch, half of the world’s Nobel laureates in physics were asserting that rockets couldn’t reach the moon and the other half were saying that they could. Prior to the launch of the stimulus program, the only thing that anyone could conclude with high confidence was that several Nobelists would be wrong about it.
But the situation was even worse: it was clear that we wouldn't know which economists were right even after the fact. Suppose that on February 1, 2009, Famous Economist X had predicted: "In two years, unemployment will be about 8 percent if we pass the stimulus bill, but about 10 percent if we don't." What do you think would happen when 2011 rolled around and unemployment was still at 10 percent, despite the passage of the bill? It's a safe bet that Professor X would say something like: "Yes, but other conditions deteriorated faster than anticipated, so if we hadn't passed the stimulus bill, unemployment would have been more like 12 percent. So I was right: the bill reduced unemployment by about 2 percentage points."
Another way of putting the problem is that we have no reliable way to measure counterfactuals—that is, to know what would have happened had we not executed some policy—because so many other factors influence the outcome. This seemingly narrow problem is central to our continuing inability to transform social sciences into actual sciences. Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs.
So begins a modified excerpt from my upcoming book, published in the current City Journal.
As if on cue, the Wall Street Journal has an article about the debate over the stimulus. It opens:
Eighteen months after President Barack Obama administered a massive dose of spending increases and tax cuts to a weak economy, a brawl has broken out among economists and politicians about whether fiscal-stimulus medicine is curing the illness or making it worse.
But today, neither side can say with certainty whether the latest stimulus worked, because nobody knows what would have happened in its absence.
The Obama administration is stocked with heirs of Mr. Keynes, including academics Christina Romer and Mr. Summers. Ms. Romer famously projected in January 2009 that without government support, the unemployment rate would reach 9%, but with support the government could keep it under 8%. It’s 9.5% today.
Some Obama administration officials privately acknowledge they set job-creation expectations too high. The economy, they argue, was in fact sicker in 2009 than they and most others realized at the time. But they insist unemployment would have been worse without the stimulus.
My article goes on to argue that very few of the many potentially useful, nonobvious predictions made so confidently by social scientists in fields ranging from economics to criminology to education can survive rigorous controlled experiments, and it tries to describe how the experimental revolution is now coming to these disciplines. I also try to outline a few lessons I think we have learned from social science experimentation so far. Much more on this will be coming later.