Krugman on Climate III: Resorting to Pascal’s Wager

In the two prior posts in this series, I argued that Paul Krugman has presented a flawed cost-benefit analysis for his proposals to reduce greenhouse gas emissions, and that his linked proposal to use carbon tariffs to push recalcitrant developing economies to go along is misguided. In this final post of the series, I will argue that his consideration of the role of uncertainty in determining our course of action is also incorrect, because he fails to provide any principle by which we could establish any limit to how much we should spend to reduce a risk that can never be eliminated entirely.

Like all serious cases for emissions mitigation to avoid climate change damages, Krugman eventually comes to the argument that the possible, rather than the expected, consequences of AGW are so severe that they justify almost any cost. In describing why a simple comparison of expected costs to expected benefits over the next century is an inadequate consideration of the economic trade-offs involved, he (correctly, in my view) focuses on the question of uncertainty.

Finally and most important is the matter of uncertainty. We’re uncertain about the magnitude of climate change, which is inevitable, because we’re talking about reaching levels of carbon dioxide in the atmosphere not seen in millions of years. The recent doubling of many modelers’ predictions for 2100 is itself an illustration of the scope of that uncertainty; who knows what revisions may occur in the years ahead. Beyond that, nobody really knows how much damage would result from temperature rises of the kind now considered likely.

You might think that this uncertainty weakens the case for action, but it actually strengthens it. As Harvard’s Martin Weitzman has argued in several influential papers, if there is a significant chance of utter catastrophe, that chance — rather than what is most likely to happen — should dominate cost-benefit calculations. And utter catastrophe does look like a realistic possibility, even if it is not the most likely outcome.

Weitzman argues — and I agree — that this risk of catastrophe, rather than the details of cost-benefit calculations, makes the most powerful case for strong climate policy. Current projections of global warming in the absence of action are just too close to the kinds of numbers associated with doomsday scenarios. It would be irresponsible — it’s tempting to say criminally irresponsible — not to step back from what could all too easily turn out to be the edge of a cliff.

Krugman is also correct, in my view, that uncertainty in our forecasts strengthens the case for action.

The stronger form of the argument from uncertainty is not only that it is possible that the true probability distribution of potential levels of warming is actually much worse than believed by the IPCC, but that a reasonable observer should accept it as likely that this is the case. As Krugman indicates, the sophisticated version of this argument has been presented by Weitzman. Weitzman’s reasoning on this topic is subtle and technically ingenious. In my view, it is the strongest existing argument that runs exactly counter to mine. (You can see a slightly earlier version of his paper, and my lengthy response here, along with links to the underlying source documents.) In very short form, Weitzman’s central claim is that the probability distribution of potential losses from global warming is “fat-tailed”, or includes high enough odds of very large amounts of warming (20°C or more) to justify taking expensive action now to avoid these low probability / high severity risks.

The big problem with his argument, of course, is that the IPCC has already developed probability distributions for potential warming that include no measurable probability for warming anywhere near this level for any considered scenario. That is, the best available estimates for these probability distributions are not fat-tailed in the sense that Weitzman means it. Therefore, Professor Weitzman is forced to do his own armchair climate science, and argue (as he does explicitly in his paper) that he has developed a superior probability distribution for expected levels of warming than the ones the world climate-modeling community has developed and published. And his proposed alternative probability distribution is radically more aggressive than anything you will find in any IPCC Assessment Report – Professor Weitzman argues, in effect, that there is a one percent chance of temperature increase greater than 20°C over the next century, while even the scale on the charts that display the relevant IPCC probability distributions only goes up to 8°C. It is not credible to accept Professor Weitzman’s armchair climate science in place of the IPCC’s.
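The mechanical force of the fat-tail claim is easy to see with toy numbers. The sketch below is purely illustrative: both the cubic damage function and the probabilities are invented for the example and come from neither the IPCC nor Weitzman. It shows why the dispute over the tail matters so much: moving just one percent of probability mass out to a 20°C outcome multiplies expected damages several times over.

```python
# Illustration only: how a 1% chance of extreme warming can dominate expected
# damages when damages rise steeply with temperature. All numbers are made up.

def expected_damage(dist, damage):
    """Expected damage for a discrete distribution {warming_in_C: probability}."""
    return sum(p * damage(t) for t, p in dist.items())

# Damages assumed (arbitrarily) to grow with the cube of warming.
damage = lambda t: t ** 3

# "Thin-tailed" distribution: probability falls off fast, nothing beyond 6 C.
thin = {1: 0.15, 2: 0.35, 3: 0.30, 4: 0.15, 5: 0.04, 6: 0.01}

# "Fat-tailed" variant: shift just 1% of probability mass out to 20 C.
fat = dict(thin)
fat[5] = 0.03    # take one percentage point from the 5 C bucket...
fat[20] = 0.01   # ...and move it to a 20 C catastrophe.

print(expected_damage(thin, damage))  # modest expected damage
print(expected_damage(fat, damage))   # the single 20 C bucket now dominates
```

With the IPCC-style thin tail, the middle of the distribution drives the answer; with the Weitzman-style tail, one low-probability bucket does. That is why the whole argument turns on whose probability distribution you accept.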

The only real argument for rapid, aggressive emissions abatement, then, boils down to the weaker form of the argument from uncertainty: the point that we must always retain residual doubt about any prediction – or in plain English, that you can’t prove a negative. The problem with using this rationale to justify large economic costs can be illustrated by trying to find a non-arbitrary stopping condition for emissions limitations. Any level of emissions imposes some risk. Unless you advocate confiscating all cars and shutting down every coal-fired power plant on earth literally tomorrow morning, you are accepting some danger of catastrophic warming. You must make some decision about what level of risk is acceptable versus the costs of avoiding this risk. Once we leave the world of odds and handicapping and enter the world of the Precautionary Principle – the Pascal’s Wager-like argument that the downside risks of climate change are so severe that we should bear almost any cost to avoid this risk, no matter how small – there is really no principled stopping point derivable from our understanding of this threat.

Think about this quantitatively for a moment. Suspend disbelief about the real world politics for a moment, and assume that we could have a perfectly implemented global carbon tax. If we introduced a tax high enough to keep atmospheric carbon concentration to no more than 420 ppm (assuming we could get the whole world to go along), we would expect, using the Nordhaus analysis as a reference point, to spend about $14 trillion more than the benefits that we would achieve in the expected case. To put that in context, that is on the order of the annual GDP of the United States of America. That’s a heck of an insurance premium for an event so low-probability that it is literally outside of a probability distribution. Former Vice President Al Gore has a more aggressive proposal that if implemented through an optimal carbon tax (again, assuming we can get the whole world to go along) would cost more like $21 trillion in excess of benefits in the expected case. Of course, this wouldn’t eliminate all uncertainty, and I can find credentialed scientists who say we need to reduce emissions even faster. Any level of emissions poses some risk. Without the recognition that the costs we would pay to avoid this risk have some value, we would be chasing an endlessly receding horizon of zero risk.
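One way to make the insurance framing concrete is to ask what tail probability the premium implicitly assumes. The arithmetic below is a hypothetical sketch: the $14 trillion premium is the figure cited above, but the catastrophic-loss figure is an arbitrary placeholder, not an estimate from Nordhaus or anyone else.

```python
# Back-of-the-envelope: how likely must the catastrophe be for the "insurance
# premium" to break even? Only the $14T premium comes from the post; the loss
# figure is a hypothetical placeholder.

premium = 14e12             # excess of costs over expected benefits (from the post)
catastrophic_loss = 700e12  # assumed present-value loss in the doomsday case (invented)

# Break-even: premium = p * loss_avoided  =>  p = premium / loss_avoided
break_even_p = premium / catastrophic_loss
print(f"{break_even_p:.1%}")  # prints 2.0%
```

Even against an assumed loss of fifty times U.S. GDP, the premium only pays if the catastrophe has at least a two percent chance of occurring; yet the scenario being insured against sits outside the published probability distributions entirely. The framing cannot escape the need to name a probability.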

So then what should we do? At some intuitive level, it is clear that rational doubt about our probability distribution of forecasts for climate change over a century should be greater than our doubt about our forecasts for whether we will get very close to 500 heads if we flip a fair quarter 1,000 times. This is true uncertainty, rather than mere risk, and ought to be incorporated into our decision somehow. But if we can’t translate this doubt into an alternative probability distribution that we should accept as our best available estimate, and if we can’t simply accept “whatever it takes” as a rational decision logic for determining emissions limits, then how can we use this intuition to weigh the uncertainty-based fears of climate change damage rationally? The only way I can think of is to attempt to find other risks that we believe present potential unquantifiable dangers that are of intuitively comparable realism and severity to that of outside-of-distribution climate change, and compare our economic expenditure against each.
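The coin-flip comparison is worth making exact, because the fact that it can be made exact is the whole point. A quick standard-library sketch:

```python
from math import comb

# Risk, as opposed to uncertainty: the odds for 1,000 flips of a fair coin
# can be computed exactly from first principles.
flips = 1000
total = 2 ** flips

# Probability that the number of heads lands within 50 of the expected 500.
prob_near_500 = sum(comb(flips, k) for k in range(450, 551)) / total
print(round(prob_near_500, 4))  # above 0.99: essentially no doubt in the forecast
```

No analogous first-principles computation exists for century-scale climate outcomes; that gap between what we can calculate and what we can only estimate is exactly what separates quantifiable risk from true uncertainty.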

Unfortunately for humanity, we face many dimly-understood dangers. Professor Weitzman explicitly considers an asteroid impact and bioengineering technology gone haywire. It is straightforward to identify others. A regional nuclear war in central Asia kicking off massive global climate change (in addition to its horrific direct effects), a global pandemic triggered by a modified version of the HIV or Avian Flu virus, or a rogue state weaponizing genetic-engineering technology are all other obvious examples. Any of these could kill hundreds of millions to billions of people. Further, specialists often worry about the existential risks of new technologies spinning out of control: biosphere-consuming nanotechnology and supercomputers that can replace humans are both topics of intense speculation. In the extreme, leading physicists publicly speculated that starting the Large Hadron Collider in 2009 might create so-called strangelets that would annihilate the world, much as Edward Teller calculated that the initial nuclear explosion at Trinity in 1945 might ignite the world’s atmosphere. This list could be made almost arbitrarily long.

Consider the comparison of a few of these dangers to that of outside-of-distribution climate change dangers. The consensus scientific estimate is that there is a 1-in-10,000 chance of an asteroid large enough to kill a large fraction of the world’s population impacting the earth in the next 100 years. That is, we face a 0.01% chance of sudden death of most people in the world, likely followed by massive climate change on the scale of that which killed off the non-avian dinosaurs, which seems reasonably comparable to outside-of-distribution climate change. Or consider that Professor Weitzman argues that we can distinguish between unquantifiable extreme climate change risk and unquantifiable dangers from runaway genetic crop modification because “there exists at least some inkling of a prior argument making it fundamentally implausible that Frankenfood artificially selected for traits that humans find desirable will compete with or genetically alter the wild types that nature has selected via Darwinian survival of the fittest.” That does not seem exactly definitive. What is the realism of a limited nuclear war over the next century – with plausible scenarios ranging from Pakistan losing control of its nuclear arsenal and inducing a limited nuclear exchange with India, to a war between a nuclearized Iran and Israel?

The U.S. government currently spends about four million dollars per year on asteroid detection (in spite of an estimate that one billion dollars per year spent on detection plus interdiction would be sufficient to reduce the probability of impact by 90 percent). We continue to exploit genetic engineering to improve crop yields because, much like avoiding burning fossil fuels, the human costs of stopping this would be immediate and substantial. We detonated the atomic bomb at Trinity, and fired up the Large Hadron Collider, because the perceived benefits anticipated from both were significant. We are not willing to engage in an unlimited level of military action to prevent nuclear proliferation, despite the risks proliferation creates, since we must weigh risk against risk.
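This comparison can be pushed to a rough back-of-the-envelope level. The sketch below uses only the asteroid figures cited above; treating the two catastrophes as comparable in severity is a loudly flagged assumption, and the result is an illustration of the method, not an estimate.

```python
# Rough comparison of cost per unit of catastrophic risk removed, using the
# asteroid figures cited in the post. Comparability of the two catastrophes
# is an assumption, not a finding.

century = 100
asteroid_risk = 1e-4               # 1-in-10,000 chance over the next century
detection_cost = 1e9 * century     # $1B/yr detection-plus-interdiction program
risk_removed = 0.9 * asteroid_risk # the cited 90% reduction in impact probability

cost_per_unit_risk = detection_cost / risk_removed
print(f"${cost_per_unit_risk:.2e} per unit of catastrophe probability removed")

# For the ~$14T climate "premium" to be as cost-effective a purchase of risk
# reduction, it would need to remove at least this much catastrophe probability:
climate_premium = 14e12
implied_risk_reduction = climate_premium / cost_per_unit_risk
print(f"{implied_risk_reduction:.1%}")  # prints 1.3%
```

On these assumptions, the climate premium would need to buy down more than a full percentage point of catastrophe probability to match a program we currently decline to fund at one ten-thousandth of its cost. Whatever one makes of the specific numbers, this is the kind of cross-risk comparison the argument from uncertainty requires and the Precautionary Principle cannot supply.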

In the face of massive uncertainty, hedging your bets and keeping your options open is almost always the right strategy. Money and technology are the raw materials for options to deal with physical dangers. A healthy society is constantly scanning the horizon for threats and developing contingency plans to meet them, but the loss of economic and technological development that would be required to eliminate all theorized climate change risk (or all risk from genetic and computational technologies or, for that matter, all risk from killer asteroids) would cripple our ability to deal with virtually every other foreseeable and unforeseeable risk, not to mention our ability to lead productive and interesting lives in the meantime.

We can be confident that humanity will face many difficulties in the upcoming century, as it has in every century. We just don’t know which ones they will be. This implies that the correct grand strategy for meeting them is to maximize total technical capabilities in the context of a market-oriented economy that can integrate highly unstructured information, and, most importantly, to maintain a democratic political culture that can face facts and respond to threats as they develop.