Weitzman: Formalism Run Amok

Harvard economist Martin Weitzman has written a paper on global warming that has gotten a lot of attention in both academia and the blogosphere. It proposes non-traditional methods of economic analysis that lead to the conclusion that we should be willing to bear very large costs today to avoid the risks of potential catastrophic impacts of anthropogenic global warming (AGW). The great advantage of Weitzman’s paper is that it goes to the heart of the issue: addressing the extreme uncertainty in predicting the extent and costs of climate change. In the end, though, I think he fails in this project, and ends up with what amounts to a mathematically sophisticated restatement of the Precautionary Principle. In the very long post that follows below the fold, I try to explain why.

There are three (roughly) sequential analyses that need to be made to predict the AGW-related benefits and costs that would result from a given emissions scenario. First, you need to predict the climate impacts of the emissions. This depends most crucially upon the estimate of climate sensitivity, conventionally defined as the global average temperature impact of a doubling of atmospheric CO2 concentration. Second, you need to estimate the positive and negative cost impacts of the predicted changes in climate. Finally, you have to evaluate the loss in utility created by these costs. This incorporates both assigning utility to various monetary and non-monetary losses at any given point in time, and comparing current costs and benefits to future costs and benefits.
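
To make that structure concrete, here is a minimal sketch of the three-step chain in Python. Every number and functional form in it (the 3C sensitivity, the quadratic damage function, the CRRA utility, the 3% discount rate) is a placeholder of my own choosing for illustration, not anything taken from Weitzman or from the models discussed below.

```python
import math

def temperature_change(doublings, climate_sensitivity=3.0):
    """Step 1: warming (in C) implied by a given number of CO2 doublings,
    scaled by the climate sensitivity."""
    return climate_sensitivity * doublings

def economic_cost(delta_t, damage_coefficient=0.003):
    """Step 2: net cost as a fraction of world GDP.
    A quadratic damage function is a common, and contestable, placeholder."""
    return damage_coefficient * delta_t ** 2

def discounted_utility_loss(cost_fraction, years_ahead, discount_rate=0.03, eta=2.0):
    """Step 3: loss in utility from that cost, weighted by marginal utility
    (CRRA with elasticity eta) and discounted back to the present."""
    marginal_utility_weight = (1.0 - cost_fraction) ** (-eta)
    return cost_fraction * marginal_utility_weight * math.exp(-discount_rate * years_ahead)

# Chain the three steps for one scenario: a full doubling, felt 100 years out.
dT = temperature_change(1.0)            # 3.0 C
cost = economic_cost(dT)                # 2.7% of GDP
print(discounted_utility_loss(cost, years_ahead=100))
```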

Weitzman’s paper makes two critical, logically-separable arguments that are relevant to different steps in this analytical process. First, he attempts to show that the probability distribution of possible negative impacts from climate change has greater odds of being disastrous than conventional analyses represent. Second, drawing on his prior work in this area, he incorporates a different method for discounting future costs and benefits than is used in conventional analysis.

I dealt with the second issue in an earlier post, but the heart of Weitzman’s paper revolves around the first point: the probability of extreme disaster is larger than current models assume. In order to evaluate this assertion, we’ll have to lay some pipe on some of the key concepts that he employs. I find Weitzman’s writing style to be exceptionally clear, but I know that a lot of the technical mathematics he uses in the paper has intimidated many readers. You can skip any of the following three sections that cover material that you already know. If any of these topics are unfamiliar to you, all you need is high school math to follow them.

Laying Pipe I: Probability Distributions

In the context of our discussion here, a probability distribution is basically a list of possible outcomes of some process with associated odds for each outcome (e.g., a 50% chance of no net cost impact of AGW, 30% chance of $1 trillion of net cost, and 20% chance of $3 trillion of net cost). This can be graphically displayed as a histogram. A probability density function (PDF) is a function that represents a probability distribution in terms of integrals, and can be seen informally as a “smoothed out” version of a histogram. Here’s a pictorial illustration of this mathematical concept unrelated to AGW (the red bars are the graphical representation of the estimated frequency of each possible outcome in histogram format, and the line represents the PDF, or the smoothed-out function):

[Figure: a histogram of outcomes shown as red bars, with the smoothed PDF curve overlaid]

Often analysts will use the equation corresponding to the line (the PDF) as the basis for predictions and other analyses. This is for two non-mutually-exclusive reasons. First, in some cases there is a good a priori reason to believe that the distribution should follow a given functional form. Second, many functional forms have mathematical properties that simplify calculations – in the long era prior to cheap computation, this was often the only practical approach.
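
Here is a small, self-contained illustration of the bars versus the curve (my own toy data, generated in Python with numpy and matplotlib, not anything from the studies discussed later): it draws a sample, bins it into a histogram, and overlays the normal PDF fitted to that same sample.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)   # stand-in for observed outcomes

# The bars: an empirical histogram, normalized so the bar areas sum to 1.
plt.hist(data, bins=30, density=True, color="red", alpha=0.5, label="histogram (the bars)")

# The curve: a normal PDF with its parameters estimated from the same data.
mu, sigma = data.mean(), data.std()
x = np.linspace(data.min(), data.max(), 200)
pdf = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
plt.plot(x, pdf, color="black", label="fitted normal PDF (the curve)")

plt.legend()
plt.show()
```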

I have tried to be studiously neutral in my language about this, but this raises the question of what is “real”: the curve or the bars. That is, is the curve simply a calculation convenience that approximates the “reality” of the data, or is the data simply an approximation to the “reality” of the actual underlying distribution?

The philosophers in the house will immediately recognize this as just one modern incarnation of the ancient debate about Platonic forms. This distinction is relevant to Bayesian reasoning generally, and will be highly relevant to our discussion of Weitzman’s paper specifically.

Laying Pipe II: Fat-Tailed Distributions

In slang terms, the extreme right-hand and left-hand sides of the PDF shown above are often called its “tails”. The funny thing about typical functional forms, such as the normal distribution (often called the bell curve) shown above, is that the curve will get very low as I move further and further to the right or to the left, but it never actually goes to zero.

Often something we care about a lot is an area under a curve like this. For example, if this is the PDF for the cost impact of AGW, then the total cost I expect AGW to create is the area under a closely related curve: the one you get by weighting each possible cost by its probability density (the area under the PDF itself is, by definition, exactly 1). Now calculating the area under a curve is not an intuitively obvious mathematical process, but luckily for us, Newton and Leibniz invented the integral calculus for just this purpose. What’s weird about doing it for many of these curves, however, is that they go on forever. No matter how far out I go to the right, for example, there is still more area under the curve yet further out. So why is the area under the curve not infinite?

Once again, we go back to ancient Greece. Zeno’s Paradox says something like the following. Imagine you are 1 foot from a wall and you want to walk over to it. In order to get to the wall, first you have to cross the point that is 1/2 a foot from the wall, then you have to cross the point that is 1/4 of a foot from the wall, then you have to cross the point that is 1/8 of a foot from the wall, and so on, forever. Even though these distances get very, very small, they are all greater than zero. So if I have to move through an infinite number of positive distances, and if it takes finite time to travel a finite distance, how can it not take me an infinite amount of time to reach the wall?

The trick here is that the infinite series 1/2 + 1/4 + 1/8 + … sums to a finite number (in fact, it sums to exactly 1). I won’t prove this to you, but take it on faith for the moment.

Now consider the infinite series 1/2 + 1/3 + 1/4 + 1/5 + … (the familiar harmonic series, minus its leading term of 1). This series also declines inexorably toward values closer and closer to zero, just like the Zeno’s Paradox series. This series, however, does not sum to a finite number; it diverges to infinity. The terms decline steadily toward zero, but just not fast enough.
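
If you would rather check this numerically than take it on faith, the little computation below (my own illustration) adds up terms of each series: the Zeno series hugs 1 no matter how many terms you add, while the harmonic-style series keeps growing, roughly like the logarithm of the number of terms, and never settles down.

```python
# Partial sums of the two series discussed above.
zeno = sum(0.5 ** k for k in range(1, 60))                 # 1/2 + 1/4 + 1/8 + ...
harmonic_tail = sum(1.0 / k for k in range(2, 1_000_001))  # 1/2 + 1/3 + 1/4 + ...

print(zeno)           # ~1.0, and adding further terms barely changes it
print(harmonic_tail)  # ~13.4 after a million terms, and still climbing
```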

In just this way, the tails of some PDFs behave like the Zeno’s Paradox series: they decline toward zero fast enough that the quantities we compute from them, such as the expected cost, sum to a finite number. Others have tails that also decline toward zero, but just not fast enough, so that the expected cost computed from them blows up toward infinity. The latter are called fat-tailed.
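
To see the same contrast with distributions rather than series, here is a rough sketch of my own (not anything from Weitzman’s paper): it compares the expected loss under a thin-tailed normal distribution with the expected loss under a Pareto distribution whose density falls off like 1/x², integrating further and further out into each tail.

```python
import numpy as np
from scipy import stats

def truncated_expected_loss(dist, upper, n=200_000):
    """Approximate the expected loss by integrating x * pdf(x) from 0 to `upper`
    with a simple midpoint rule."""
    step = upper / n
    x = np.arange(n) * step + step / 2
    return float(np.sum(x * dist.pdf(x)) * step)

thin = stats.norm(loc=1.0, scale=1.0)   # tail declines exponentially fast
fat = stats.pareto(b=1.0)               # density falls off like 1/x**2 for x >= 1

for upper in (10, 100, 1_000, 10_000):
    print(upper,
          round(truncated_expected_loss(thin, upper), 3),
          round(truncated_expected_loss(fat, upper), 3))
# The thin-tailed column settles almost immediately; the fat-tailed column keeps
# growing, roughly like ln(upper), so the expectation never converges.
```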

The central claim in Weitzman’s paper is that the probability distribution of losses from AGW is fat-tailed.

Laying Pipe III: Types of Predictions

Consider three generic types of predictions: deterministic predictions (“If I let go of this pencil, it will fall”), probabilistic predictions that employ a probability distribution (“If I flip this coin, it has a 50% chance of coming up heads and a 50% chance of coming up tails”), and uncertain predictions, for which we can not specify a reliable distribution of probabilities (“There will be a military coup in Pakistan in 2008”). The economists in the house will immediately recognize the distinction between probabilistic and uncertain predictions as, in essence, Knight’s classic distinction between risk and uncertainty.

Strictly speaking, all predictions are uncertain, but as a practical matter we treat different predictions differently based on the observed reliability of the relevant predictive rules used to generate them. Consider the deterministic prediction about what happens if I let go of a pencil. Writers often like to show off by making the point that the statistical laws of quantum mechanics actually state that there is some infinitesimal chance that a lot of particles will bunch up beneath the pencil and push it upward. Without getting into the fine points of quantum mechanics, this is a probabilistic prediction. But there is a yet-further layer of doubt that they typically don’t recognize. What if the laws of quantum mechanics are wrong, so that the PDF they generate is incorrect? All scientific findings, and hence predictions, float on such a sea of doubt.

This is a version of the fundamental problem of induction, conventionally credited to Hume. He observed that even though every swan ever observed had been white, we can’t be sure that every future swan we see will be white. In one form or another, this has been the central problem for the philosophy of science for most of the past century. This seems hyper-theoretical in the case of dropping a pencil, but it is very practical in the case of predicting the impact of AGW.

No serious person believes that even the physical science projections for temperature impacts of emissions (i.e., how many degrees hotter the world will get if we increase atmospheric carbon dioxide concentration according to some emissions scenario) are deterministic. This is why climate modelers and integrated climate-economics modelers spend so much time developing probability distributions for various outcomes and combining possible outcomes via odds-weighting to develop expected outcomes. When we hear a competent modeling group say “the expected outcome is X,” it doesn’t mean they’ve assumed only the most likely scenario will occur; it does mean, however, that they assume their distribution of probabilities is correct (not being idiots, of course they constantly work hard to test and improve this distribution of probabilities).
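
For concreteness, “combining possible outcomes via odds-weighting” just means taking a probability-weighted average across scenarios. The numbers below are invented purely to show the mechanics; they are not anyone’s actual model output.

```python
# Hypothetical scenario table: (probability, warming in C by 2100).
scenarios = [
    (0.10, 1.5),
    (0.40, 2.5),
    (0.35, 3.5),
    (0.13, 5.0),
    (0.02, 8.0),
]

assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # the probabilities must sum to 1

expected_warming = sum(p * dT for p, dT in scenarios)
print(expected_warming)  # ~3.2 C: not the single most likely scenario, but the weighted average
```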

Weitzman shines a spotlight on the issue of whether these analysts have done a good job, especially for the right tail of the distribution of potential losses.

Weitzman’s Key Claims

OK, now that we’re done with the preliminaries, let’s get to Weitzman’s paper.

He emphasizes over and over again the radical uncertainty (as opposed to mere probabilistic risk) in predicting losses created by carbon emissions. I have written repeatedly on this same point. Weitzman sees the issue clearly, and tries to incorporate this uncertainty into his formal analysis. The way he does this is ingenious.

He builds a complex probabilistic functional form that combines the three analytical steps that I described earlier: it incorporates the warming impact of emissions, the resulting cost impacts and the time-adjusted utility loss from these costs. He attempts to demonstrate that the PDF of this functional form is fat-tailed. That is, he argues the sum of estimated loss of utility explodes to some almost-infinite number – limited only by the idea that maximum losses are finitely bounded by the statistical value of all future human life – because the very-low and very-difficult-to-estimate odds of various disaster scenarios created by huge temperature increases are much bigger than conventional models assume. Said yet another way, he argues that the data we have indicates that the best functional form of this model is not a normal distribution, but a different distribution that projects higher odds of huge temperature increases and associated losses way out in the right tail of the PDF well beyond anything that we have ever experienced. What’s so clever about Weitzman’s approach is that this enables him to argue that the expected costs explode, without having to specify exactly what these probabilities are way out in the tail.
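
A deliberately crude Monte Carlo sketch can show why the shape of the tail matters so much. Everything below is my own placeholder (the two candidate sensitivity distributions, the damage function, the utility function, the cap on losses); it mimics the mechanism Weitzman formalizes, not his actual model, and its outputs are not estimates of anything.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Two candidate distributions for climate sensitivity (C per doubling), both
# centered near 3C but with very different right tails.
thin_tailed_S = rng.normal(3.0, 1.0, N).clip(min=0.1)
fat_tailed_S = 3.0 * rng.pareto(2.0, N) + 1.0        # power-law right tail

def expected_utility_loss(sensitivity_draws, eta=2.0):
    """Placeholder chain: damages rise quadratically with warming, capped just
    below 100% of consumption (the analogue of Weitzman's finite bound on
    losses), then converted to a utility loss with CRRA utility."""
    damages = np.minimum(0.005 * sensitivity_draws ** 2, 0.999)
    consumption = 1.0 - damages
    utility = (consumption ** (1.0 - eta) - 1.0) / (1.0 - eta)
    baseline_utility = 0.0                            # utility at consumption = 1
    return float(np.mean(baseline_utility - utility))

print(expected_utility_loss(thin_tailed_S))  # small
print(expected_utility_loss(fat_tailed_S))   # orders of magnitude larger, driven
# almost entirely by the rare draws that hit the damage cap; move the cap and
# the answer moves with it, which is the sense in which the tail dominates.
```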

In principle, of course, an abstract version of what Weitzman asserts here is always true. As indicated earlier in this post, all PDFs rest on a foundation of uncertainty. In principle, it is always possible for the PDF that we use to make a probabilistic forecast to be wrong (e.g., my coin-flipping probability distribution of 50% chance of heads and 50% chance of tails doesn’t consider the possibility that the coin may land on an edge and stay there). If history is any guide, it’s actually very safe to assume that any PDF is “wrong” in that it is an imperfect representation of reality. A subset of the ways in which any PDF can be wrong is that I think it is thin-tailed but it is really fat-tailed. So, in order for Weitzman to argue that he is making anything other than a trivial statement, he must provide evidence not just that it is possible for the distribution of losses to be fat-tailed, but that a reasonable observer should accept it as likely in the case of AGW. We must move from the world of a debate over functional forms and formalism, to a data-driven argument about actual numbers.

His numerical argument focuses on the first step of the analysis: the PDF of possible climate sensitivities. He argues that the plausible range of possible AGW-caused temperature changes is much wider than has been reflected in economic models, and that this provides confirmation that AGW is an example of a problem for which his analysis is relevant. Said more technically, he claims that we have numerical evidence that the PDF of climate sensitivities is fat-tailed.

Weitzman addresses this numerical analysis at the start of his paper. He invents the label S1 for what he calls “climate sensitivity narrowly defined”. Weitzman cites the most recent IPCC report on global warming (IPCC-AR4) as saying that climate sensitivity has a maximum likelihood estimate of 3C per doubling, is likely to be between 2C and 4.5C, and is very unlikely to be less than 1.5C. This report goes on to say that “Values substantially higher than 4.5C cannot be excluded, but agreement of models with observations is not as good for those values.” He is concerned, correctly in my view, with the possibility of sensitivity above 4.5C that “cannot be excluded”.

Weitzman averages the 95% confidence intervals for 22 studies cited in AR4 to provide a rough estimate that there is a 5% chance that S1 is > 7C. He says that “it is apparent that the upper tails of these 22 PDFs tend to be sufficiently long and sufficiently fat that one is allowed to presume from a simplistically-aggregated PDF of these 22 studies” that there is about a 1% chance that S1 is > 10C.

He then proceeds to define something he calls S2. According to Weitzman, S2 is “a more abstract generalized climate-sensitivity-like scaling parameter that includes heat-induced GHG-release feedbacks on the forcing from…weakening of carbon sinks, increased respiration of soil microbes, climate-stressed forests, and so forth.” He cites a long-term study of ice cores that he uses to estimate S2/S1, and concludes from this that there is a 5% chance that S2 > 11.5C and a 1% chance that S2 > 22.6C. Then he’s off to the races, using S2 as his definition of climate sensitivity throughout the paper.
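
To make the shape of this back-of-the-envelope exercise visible, here is my own reconstruction in code of the kind of arithmetic involved. The five “study” PDFs, the equal-weight pooling, and the S2/S1 multiplier of 2 are all stand-ins I have invented; this is not Weitzman’s actual procedure or data, and the printed probabilities should not be expected to reproduce his numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-ins for a handful of published climate-sensitivity PDFs (lognormals with
# a median of 3C and varying spread); the real exercise used the 22 studies
# cited in IPCC AR4.
studies = [stats.lognorm(s=spread, scale=3.0) for spread in (0.3, 0.4, 0.5, 0.6, 0.7)]

# "Simplistic aggregation": pool an equal number of draws from each study.
s1 = np.concatenate([d.rvs(200_000, random_state=rng) for d in studies])
print("P(S1 > 7C) ", np.mean(s1 > 7.0))
print("P(S1 > 10C)", np.mean(s1 > 10.0))

# An assumed multiplier to turn S1 into the "extended" sensitivity S2 that
# includes slow feedbacks; Weitzman infers his from a long ice-core record.
s2 = 2.0 * s1
print("P(S2 > 11.5C)", np.mean(s2 > 11.5))
print("P(S2 > 22.6C)", np.mean(s2 > 22.6))
```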

Given that the IPCC mean estimate of climate sensitivity is 3C, saying that there is a 1% chance that the actual value is above 22.6C fits anybody’s definition of “fat-tailed”. To put this in perspective, the average adult male in the US is about 5’9” tall; if this tail held for the distribution of male height in the US, there would be more than a million 40-foot-tall men walking around America. It would be hard to find a functional form that fits such data for which the relevant integrals, like the expected loss, come out finite.

You really don’t need all of the complicated mathematical formalism that follows in Weitzman’s paper if you accept his distribution of possible climate sensitivities. At a practical level, he’s saying that there is a reliably-quantified 1% chance that the average year-round temperature on Earth will be about 100 degrees Fahrenheit within the foreseeable future. This is about the average summer temperature in Death Valley, California. I think that any rational person who accepted this premise would support pretty aggressive measures to reduce carbon emissions without needing a lot of further analysis.
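
For what it is worth, the arithmetic behind both of those illustrations checks out, at least under rough assumptions I am supplying myself (a purely multiplicative reading of the tail, roughly 110 million adult men in the US, and a current global mean surface temperature of about 57F):

```python
# The height analogy: scale average adult male height by the same multiple that
# 22.6C represents relative to the 3C mean sensitivity.
ratio = 22.6 / 3.0                  # roughly 7.5x
height_ft = (69 / 12) * ratio       # 69 inches is about 5'9"
tall_men = 0.01 * 110_000_000       # a 1% tail of ~110 million adult US males
print(round(height_ft, 1), int(tall_men))   # ~43 ft, ~1.1 million men

# The Death Valley comparison: add 22.6C of warming to a ~57F global mean.
print(57 + 22.6 * 9 / 5)            # ~97.7F
```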

Evaluation of Weitzman’s Key Claims

Weitzman’s numerical analysis of climate sensitivity is unpersuasive for three reasons of an increasingly fundamental nature.

1. Weitzman Doesn’t Appear to Believe His Own Analysis

Weitzman characterizes his analysis of the PDF of climate sensitivity using the following terms: “back-of-envelope” (page 2), an “extraordinarily crude approximation” (page 2), “ballpark” (page 2), “extremely crude ballpark numerical estimates” (page 5), “simplistic” (page 5), “naively assume” (page 6), “simplistically aggregated” (page 6), “very approximately” (page 7), “some VERY rough idea” (page 7), “without further ado I am just going to assume for the purposes of this simplistic exercise” (page 8), “wildly-uncertain unbelievably crude ballpark estimates” (page 8), “crude ballpark numbers” (page 9), and “overly simplistic story” (page 9). Weitzman is a well-known economist, so I assume he could pass peer review with a paper titled “My Shopping List”, but at a certain point you just have to ask whether one should be able to publish an estimate in an academic paper with these kinds of qualifications around it.

Now, Weitzman argues in the paper that this kind of uncertainty proves his point, but I think this is exactly wrong, and that it rests on a conceptual error. He is certainly correct that if there were “no” uncertainty (that is, if we could proceed as though our estimate for the PDF of climate sensitivity were as well-validated as Newton’s Laws of Motion), then there would be no debate – we would simply use the known PDF. But suppose we had no quantitative information about some PDF: no empirical measurements and no a priori theory about the likelihood of any outcome vs. any other. In that case we could say nothing useful about its thin-tailed-ness, fat-tailed-ness or anything else about it. Of course this describes any situation in which we have no reliable, quantitative predictive tool. But the logic of “the greater the uncertainty in predicting a potential disaster, the more we should pay to avoid the risk” is nothing but a pure restatement of the Precautionary Principle. I have no reliable way to build a PDF for the sensitivity of the likelihood of thermonuclear war to various anti-proliferation policies – it has maximum uncertainty. Do I therefore have a mathematical proof that I should spend an infinite amount of money on non-proliferation? It is only when we are in an intermediate zone of some scientifically reliable information about the PDF, but a lot of remaining practical uncertainty around it, that his analysis becomes relevant. The asserted fact that “science tells us” that we have a (roughly, crudely or whatever other CYA you want to put on it) 1% chance of a 100F planet is what would promote the AGW risk above others. If “roughly”, “crudely” or whatever goes all the way to “total uncertainty”, we’re right back where we started.

Weitzman is acutely aware of this issue elsewhere in his paper. Here is what he says on page 32, after about 20 pages of dense formal analysis:

The degree to which the kind of “generalized precautionary principle”…is relevant for a particular application must be decided on a case-by-case “rule of reason” basis. … In the particular application to the economics of climate change, with so obviously limited data and limited experience about the catastrophic reach of climate extremes, to ignore or suppress the significance of rare fat-tail disasters on the order of magnitude of anything remotely resembling a 5% chance of a temperature increase of more than 11C or a 1% chance of a temperature increase of more than 20C is to ignore or to suppress what economical-statistical decision theory is telling us here loudly clearly is potentially the most important part of the analysis.

This is reasonable, but it points to the fact that without some numerical analysis to convince a reasonable person about the likelihood of at least somewhat-extreme outcomes, all we have is a general statement (“the tails might be fat”) that is in concept true for any risk. Unfortunately, Weitzman can’t even convince himself about the relevant numerical analysis.

2. Weitzman’s Estimate for the PDF of S2 is Unsupported

Weitzman has good reason to be unconvinced. His development of the PDF for S1 is not indefensible, but is clearly shaded to support his argument. Far more important, however, is Weitzman’s introduction of the concept of an “extended” S2 vs. S1 climate sensitivity. At a qualitative level, this is not a crazy idea. It is not ridiculous to assert that even if we could measure the climate sensitivity with all associated feedbacks today, it may happen that the nature of these feedbacks could change in some future climate state in a manner that no amount of analysis can anticipate.

But how could we try to quantify such a theorized effect? Consider what this means with respect to the Global Climate Models (GCMs) that are used as a key method to estimate climate sensitivity. Without feedbacks, climate sensitivity is on the order of 1C. The whole point of a GCM is to model feedbacks, that is, to be a sufficiently accurate representation of the planet to allow us to apply known physical laws to predict future climate development from first principles. So, unless he is implicitly rejecting the capacity of GCMs to make useful predictions, the idea of S2 requires us to believe that GCMs are useful for some temperature range, but then become useless beyond some other temperature range. This is not a crazy idea either, but it means that we can’t use the GCMs that are the primary basis for temperature impact predictions in order to quantify this idea of S2.

Weitzman then resorts to very long-term climate records, referencing a single ice-core study. The problem with these, of course, is that there is no “mirror earth” to act as a control group, so in order to estimate the attributable effect of changes in atmospheric CO2 concentrations on temperatures, we have to have some ability to adjust for temperature changes created by all other climate forcings that were occurring simultaneously. In other words, we need some version of a GCM or its functional equivalent in order to translate a set of data that records estimated temperature, atmospheric carbon dioxide concentration and other plausible temperature drivers by time period into an estimate for “long-term” or “true” or “S2” climate sensitivity.

There is no escaping the need for Weitzman to assert that he has conducted new science that better estimates the long-term climate feedbacks than anything which the world climate modeling community has been able to develop if we are to accept his estimates of S2. It’s not like they haven’t been trying to crack this problem, or are not aware of the same data and analyses that Weitzman cites. Current GCMs represent the state-of-the-art method for analyzing this question. In fact, scientists have built exactly such PDFs. The IPCC AR4 reports the PDF of century-long predictions for the impact of long-term emissions scenarios on global temperatures in Figure 10.28. Figure 10.28b shows such a projection for Scenario A1B, often used as a reference case, that has no measurable probability of a temperature increase above about 5C through the end of the century. Once again, they might very well be wrong, but Weitzman is making a far stronger claim – that he has an alternative PDF for possible temperature increase that we should accept as more likely than that produced by the IPCC.

This portion of Weitzman’s paper, upon which all his practical conclusions rest, represents an almost perfect mirror image of the kind of armchair climate science that has led many on the Right to declare that AGW is a hoax.

3. There is No Good Reason to Think That the Probability Distribution for Estimates of Climate Sensitivity Fits Any Functional Form

Yet more fundamental is Weitzman’s assumption that there should be any functional form for the distribution of possible climate sensitivities. But why should this be so?

Climate sensitivity is a physical phenomenon. It has a reality independent of our ability to analyze it. At a practical level, it is a quantity like the gravitational constant G (recognizing the complexity that climate sensitivity is conditional on other specifiable parameters). The probability distribution under discussion is a distribution of our error in specifying this value; it is not some inherent property of the phenomenon. (In Bayesian terms, the actual Earth doesn’t represent a “draw” from a super-population of possible Earths with various climate sensitivities.) This probability distribution represents some complex accumulation of systematic and random error in matching the specification of climate models to the physical reality of the planet. There is no good reason to think that some mathematical function that can be integrated has anything to do with this distribution of measurement error.

Therefore, even if we assumed that we had derived an estimated distribution of probabilities for climate sensitivities within some range using physical measurements or reliable models, and if we further assumed that some fat-tailed PDF were a better empirical fit than the normal distribution to this data in this range, there is no good reason to conclude that the fat-tailed PDF would be a better method for projecting the likelihood of extreme climate sensitivities than any other distribution or no distribution at all. Either we could extend the range of the reliable distribution of likelihood of various more extreme climate sensitivities through further experimentation and/or modeling, or we would have to treat these more extreme possibilities as purely uncertain.

Conclusions

Let me start by asserting a weak form of Weitzman’s point that I think is correct. As I have argued in many prior posts and articles, the expected impacts of AGW, and even the risk-adjusted impacts of AGW, do not justify rapid, aggressive reductions in carbon emissions. There is, however, massive uncertainty (rather than mere risk) in our ability to predict the impacts of AGW, and recognizing this reality should lead us to take at least two actions: (1) improve the science to better specify these extreme risks, and (2) hedge this uncertainty by making “insurance-type” investments today that would provide protection if an extreme AGW scenario ends up happening.

I have tried to distinguish this from the stronger claim that Weitzman makes: that we can now usefully quantify the loss in utility that extreme AGW impacts may cause.

I don’t think that this is a distinction without a difference. Without quantification of risk, we are back in the sea of uncertainty in which we have to make judgments about a myriad of potential catastrophes, ranging from nuclear wars to global pandemics. I have previously written about numerous such risks of qualitatively comparable realism and severity to AGW. Weitzman considers two risks (bioengineering and asteroid collision) in comparison to AGW in his paper. He is forced to say that:

Perhaps it is little more than intuition, but for what it is worth I do not feel that the handful of other conceivable environmental catastrophes are nearly as critical as climate change.

Why is the risk of, say, bioengineering, not nearly as critical? Here is Weitzman’s logic:

…there exists at least some inkling of a[n]…argument making it fundamentally implausible that Frankenfood artificially selected for traits that humans find desirable will compete with or genetically alter the wild types that nature has selected via Darwinian survival-of-the-fittest.

I’m not sure I’m totally convinced that Weitzman has fully investigated these other risks.

I think that Weitzman has seriously engaged on the AGW issue at a unique level of depth. I admire his tenacity, seriousness and intellectual creativity in refusing to shrink from the issue. But he has tried to cross a bridge too far. We can not (yet) reliably quantify these alternative risks, and must (for now) live with uncertainty.