I’m glad to see that Megan McArdle is confronting the reality that Waxman-Markey is very unlikely to work, either in the sense of providing benefits greater than costs, or in the sense of achieving anything like its stated goals, even without regard to cost.
I’m also glad to see that Ezra Klein is explicit about his acceptance that climate change is expected to have extremely limited effects on the United States for at least the next hundred years. I figure that ought to be pretty important when debating the proper policies for the government of the United States. On the other hand, we continue to disagree about the financial efficiency of the foreign aid program defined by transforming the energy sector of the American economy in order to very slightly ameliorate a predicted problem that might affect people who might live in low-lying equatorial regions of the world decades from now.
Ryan Avent, on the other hand, refuses to see the light. If you want the background to this post, there is Ryan, me, Ryan, me, and now this one from Ryan. This will be my last turn at bat on this one.
In his latest reply, Ryan starts with this:
One thing that recurs in Manzi’s writing on climate change issues is an extreme devotion to the infallibility of models.
Here is what I said in what I’m pretty sure is the first thing I ever wrote for publication on climate change:
Over the past several decades, teams in multiple countries have launched ongoing projects to develop large computer models that simulate the behavior of the global climate in order to account for feedback effects. While these models are complex, they are still extremely simplistic as compared with the actual phenomenon of global climate. Models have successfully replicated historical climates, but no model has ever demonstrated that it can accurately predict the climate impact of CO2 emissions over a period of many years or decades.
Climate models generate useful projections for us to consider, but the reality is that nobody knows with meaningful precision how much warming we will experience under any emissions scenario. Global warming is a real risk, but its impact over the next century could plausibly range from negligible to severe.
In fact, I think it is fair to say that the idea of uncertainty in our predictions has been central to my entire argument on climate change from the beginning.
Ryan goes on to criticize my point that if we really have a track record of consistent prediction error, competent modelers should be able to incorporate that information into forward predictions:
Just because Doug Elmendorf can probably say that he’s going to overestimate the costing of Waxman-Markey doesn’t mean that he can say where and by how much, with the level of methodological surety necessary to allow him to include an adjustment of some sort. Manzi seems to convey the idea in his work that such a state of affairs ought to render a piece of information unusable, or irrelevant. But that’s a strange way to approach a problem — any problem.
Sorry, that’s not a strange way to approach prediction at all. If I make a string of 100 predictions using method X for how many runs the Mets will score in their next game, and my prediction is always exactly one run low, it would likely be intelligent to modify my method to be whatever X produces plus one run. The complication, of course, is that I’m not always one run low; I’m one run low on average. More fundamental is the reference-class problem in making such an adjustment: is it that my model tends not to work as well for home games, or for games on weekends, or for games in which pitcher Y starts, and so on? This is the substance of the nerdy debates about predictive modeling.
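The baseball example can be sketched in a few lines of Python. The numbers below are invented purely to illustrate the mechanics of the adjustment; nothing here is real data:

```python
# Toy illustration of the bias-correction idea from the baseball example.
# All numbers are made up; the point is the mechanics, not the data.

actual_runs =    [4, 5, 3, 6, 2, 5, 4, 3]   # what the Mets actually scored
predicted_runs = [3, 4, 2, 5, 1, 4, 3, 2]   # what "method X" predicted

# Average residual over the track record: here, exactly +1 run per game.
residuals = [a - p for a, p in zip(actual_runs, predicted_runs)]
mean_bias = sum(residuals) / len(residuals)

def adjusted_prediction(raw_prediction):
    """Method X plus the historically observed average bias."""
    return raw_prediction + mean_bias

print(mean_bias)               # 1.0
print(adjusted_prediction(4))  # 5.0
```

Note that the correction only works if the bias is stable across the whole reference class of games; if the model is actually one run low only at home and dead-on on the road, adding one run to every prediction makes things worse. That is the reference-class problem in miniature.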
If you look across his posts on this topic, Ryan is trying to argue three things simultaneously: (1) we should take this cost forecast as important because the CBO produced it; (2) there is an unambiguous track record of environmental cost over-prediction, simple enough that as consumers of this prediction we can reliably reduce the CBO forecast by some non-zero amount; and (3) the CBO failed to use this information in producing its forecast. Sorry, I’m not buying. I agree that there is lots of uncertainty around any such forecast, but not that there is a sufficiently reliable forecast bias that we should treat it as an inherent over-estimate.
Ryan goes on to say:
You see this with his discussion of cost-benefit analysis in general. People say to Manzi, well, what if the predictions are off? Manzi replies, but of course, the modelers have thought of this and have built probability distributions to include all these different possibilities, so when you ask “what if they’re off” you’re really asking “what if something happens that’s outside the distribution,” which means you’re just invoking the precautionary principle, which is daft, etc.
This paragraph is pretty accurate, other than the last part. It’s not true that I’ve equated worrying about the danger of “something outside the probability distribution” with “the precautionary principle”. I have consistently characterized the idea that climate change damages could be worse than the worst-case scenarios projected by the IPCC as a real danger, but argued that we must set this in the context of other dangers, and consider the costs of trying to forestall it in comparison to benefits. The precautionary principle, on the other hand, goes far beyond recognizing the realistic possibility of such an outcome, and instead proceeds all the way to the fallacy of the one-sided bet: the idea that we should bear almost any cost in return for almost any reduction in the expectation of such an outcome.