Models, Models Everywhere And No One Stops to Think
Lots of bloggers have attacked my cost-benefit analysis of Waxman-Markey. The current line of attack seems to be that cost-benefit analysis is just the wrong way to think about this problem. There are, as usual, various intertwined logics for why this might be so. They all seem to center around the idea, however, that while climate science can make reasonably reliable predictions a century or two out, economic projections over this time scale are basically worthless.
While this exaggerates the reliability of climate models, I agree that they are more useful than very long-term economics models. But let’s assume arguendo that my critics are correct in the extreme, and therefore we have no ability to translate a climate forecast into an estimate of economic damages. Then, they say, we know some kind of catastrophe is coming, and it’s our duty to head it off.
There are at least a couple of huge problems with this argument.
First, all estimates of the climate impact of human-induced CO2 emissions rely on a long-term emissions forecast, which in turn relies on (i) forecasts of population growth, (ii) forecasts of economic growth per capita, (iii) forecasts of the energy intensity of economic output, and (iv) technology forecasts for the carbon intensity of each unit of energy. That is, we can’t make a long-range climate forecast in the absence of long-range economic forecasts.
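This decomposition is essentially the Kaya identity: emissions are the product of population, GDP per capita, energy per unit of GDP, and CO2 per unit of energy. A minimal sketch, with placeholder numbers that are illustrative rather than sourced from any actual projection:

```python
# Kaya identity: CO2 emissions as a product of four forecastable factors.
# All numbers below are illustrative placeholders, not real projections.

def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """CO2 emissions (GtCO2/yr) = P * (GDP/P) * (E/GDP) * (CO2/E)."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Hypothetical 2100 scenario inputs:
population = 9e9            # people
gdp_per_capita = 30_000     # $ per person per year
energy_intensity = 5e-3     # MJ per $ of GDP (illustrative)
carbon_intensity = 6e-11    # GtCO2 per MJ (illustrative)

emissions = kaya_emissions(population, gdp_per_capita,
                           energy_intensity, carbon_intensity)
print(f"Illustrative emissions: {emissions:.1f} GtCO2/yr")
```

Because the factors multiply, a modest error in each one compounds: halving assumed GDP growth or carbon intensity halves the emissions path, which is exactly why the economic and climate forecasts cannot be separated.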
The differences in emissions across economic scenarios are not trivial. They are the basis for the UN IPCC’s scenario-based forecasting approach, in which the estimated temperature impact by 2100 under the highest-emissions plausible scenario is roughly three times that under the lowest-emissions plausible scenario.
Second, if we assume that we have literally no technical capability to translate a temperature forecast into a forecast of damages, then we are forced to rely on intuition. A 3°C temperature increase by 2100 sure doesn’t sound so awful to me. Want to argue that I’m misguided, and here is this long list of awful things that will happen? We’re right back to estimating damages, and I’ll just point to what the IPCC, CBO, EPA and so forth estimate when they try to do comprehensive estimates of net impacts.
Look, I get the point that trying to forecast what our GDP will be in 2136 to within a few percent is ridiculous. It is. Further, I get the point that indefinite accumulations of CO2 in the atmosphere will eventually become very damaging. I also get the point that there is some risk that we might reach that point sooner than we think. These are all true statements.
But if they are to inform rational decision-making, they also require quantitative, not rhetorical, interpretation. When do we expect that CO2 will be how big a problem? How big is the risk that it will be worse than that? And so on. While we may legitimately criticize a specific cost-benefit analysis or methodology, it seems hard to imagine a rational approach to such decision-making that doesn’t try to envision the future world under alternative policy assumptions and assert a preference.
So, show us your alternative forecasts, and provide sources and methodologies. If you choose to respond with a bunch of words describing how awful things could look, or wave your hands, you’re still making a forecast – it’s just not of any real use. False precision is one way an unwarranted assertion of certainty shows itself, but so is simply asserting, without evidence, that the damages we should expect within some finite time outweigh the costs of some proposed program to avert them.
The forecasts of every responsible body, as I have gone to such boring lengths to show over many articles, actually make it very hard to justify any so-far proposed carbon pricing or rationing schemes based on the benefits we should expect them to produce over the next roughly 100 years. (I think the fact that my critics are mostly attacking the idea of cost-benefit analysis itself, rather than my quantitative arguments, is pretty good evidence of this.) Eventually, of course, if we assume linear extrapolation of current trends, CO2 will become a deadly problem; but I think if you’re honest, you’ll find yourself having to justify these programs based on things that you project will happen in the 22nd century and beyond. Who’s being arrogant about predicting the future now?
I have very little idea what the technological, social and political bases for the human economy will be hundreds of years from now, and think that trying to manage such a problem by changing carbon pricing today is foolish in the extreme.
Jim,
I applaud your persistence in arguing your point of view on climate change through many angles, continuously engaging your critics responsibly through fair argument.
It seems that there are two potential policy mistakes to be made: acting too late and acting too soon. Each can have severe consequences. In the first, we destroy our planet. In the second, we damage the world economy unnecessarily, increasing poverty, particularly in the developing world, when we didn’t need to. For instance, consider a scenario in which a potential technology that economically addresses climate change is impeded because its development requires carbon emission.
Is it accurate to say that while most people are focused on the first mistake and not the second, you’re more concerned with the second mistake and less with the first?
— Ben · Jul 7, 03:23 PM · #
Ben:
Thanks.
Yes, that’s basically fair. I would say that the discussion as of a couple of years ago was one in which the Right was mostly lost in the fever swamps of debating whether or not AGW is real, so that advocates of carbon pricing could get away with not really having to engage on the balance between various risks and costs. Now, the debate has become a lot more practical.
— Jim Manzi · Jul 7, 03:31 PM · #
This is a great post and I agree with just about everything except the conclusion :-).
Have you read the book The Logic Of Failure: Recognizing And Avoiding Error In Complex Situations by Dietrich Dorner? It’s a great book about how people (mis)manage complex systems in predictable ways. There are many great lessons in it, but I’ll just cherry pick one that I think is relevant here. It’s that in analyzing complex systems, people often incorrectly focus on the value of a quantity rather than on its rate of change. This can lead them to delay action for too long, and ultimately require a large intervention to correct the problem. In a complex system, such a large intervention can have dramatic unintended consequences. One of the lessons I take from the book is to manage complex systems by focusing on the rate of change of quantities, and making small interventions to keep them in a desired range, rather than waiting for the problem to manifest itself and making a large intervention.
So given the uncertainty that you point out, it seems prudent to me to follow what is known to be a better approach.
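The rate-vs-level heuristic Chris describes can be sketched as a toy simulation. Everything below is invented for illustration – it is not a model of climate or the economy – but it shows the book’s point: a policy that nudges the rate of change early makes many small interventions, while a policy that waits for the level to cross a threshold must eventually intervene much harder.

```python
# Toy illustration (not a climate model): managing a growing stock
# by its rate of change versus by its level.

def simulate(steps, policy):
    """Run a simple stock-and-flow system under a given intervention policy."""
    level, rate = 100.0, 2.0          # arbitrary starting stock and growth rate
    interventions = []
    for _ in range(steps):
        cut = policy(level, rate)     # how much the policy reduces the growth rate
        rate -= cut
        if cut:
            interventions.append(cut)
        level += rate
    return level, interventions

# Rate-focused policy: small, early nudges whenever growth is positive.
early = lambda level, rate: min(rate, 0.25) if rate > 0 else 0.0

# Level-focused policy: do nothing until the stock crosses a threshold,
# then slam the growth rate into reverse.
late = lambda level, rate: rate + 1.0 if level > 160 else 0.0

level_early, cuts_early = simulate(40, early)
level_late, cuts_late = simulate(40, late)
print(level_early, max(cuts_early))   # many small cuts
print(level_late, max(cuts_late))     # one large cut
```

In this toy run the early policy caps the stock with interventions an order of magnitude smaller than the single intervention the late policy is forced into, which is the pattern Dorner warns about.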
— Chris · Jul 7, 03:45 PM · #
Chris:
Thanks.
I have not read the book. I take the asserted management heuristic as very reasonable-sounding advice, as a very general proposition. Of course, it’s easy to imagine lots of real and hypothetical situations in which this is not true.
A few quick points, though:
1. It’s very unclear that global human society is a managed system at all – it’s probably more accurate to think of it as evolving.
2. There are many opposed heuristics here. One obvious one is the idea of availability bias – roughly, we tend to radically under-estimate the odds of a catastrophe arising from any one source until we start to focus on it, at which point we tend to radically over-estimate the odds.
3. The same point about quantification applies to the first derivative: what rate of increase in CO2 is “safe”? How do you know?
— Jim Manzi · Jul 7, 03:56 PM · #
On 1., that’s what this whole debate is about, no? Are we evolving into a bad area, and can/should we try to manage our way out of it?
On 2., that’s also discussed in Dorner’s book. As Rumsfeld famously said, “We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.” I don’t know of any obvious solution to this, except to do as much meta-thinking as possible and try not to get locked into any one intellectual box.
On 3., I would first look at the second derivative, which is currently positive, no? (For global emissions when time-averaged to smooth out fluctuations.) So my first answer would be that we (meaning the world) should at least make the second derivative zero, if we can. Beyond that, it’s of course very complicated. CBA has a role of course, as does a recognition of the uncertainties in the science, the economics, and in technological developments, as does a recognition of the temporal lag between emissions and climate response. At the very least, the prudent thing for today would seem to be to lay the groundwork for potential (near) future reductions, if we learn that it really is important to limit CO2 concentrations to something like 540 ppm (to pick just one number). As to the ideal emissions profile over the next 30 years? No one really knows that, of course.
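The suggestion to look at smoothed first and second derivatives can be made concrete with a toy calculation. The emissions numbers below are invented for illustration, not data; the point is only that a positive average second difference means emissions growth is itself accelerating:

```python
# Illustrative: first and second differences of a made-up annual
# emissions series, averaged to damp year-to-year fluctuations.
emissions = [28.0, 28.9, 29.5, 30.4, 31.5, 32.3, 33.4, 34.6]  # GtCO2/yr, hypothetical

def diff(xs):
    """Successive differences: a discrete derivative of the series."""
    return [b - a for a, b in zip(xs, xs[1:])]

first = diff(emissions)    # yearly growth in emissions (first derivative)
second = diff(first)       # change in that growth (second derivative)
avg = lambda xs: sum(xs) / len(xs)

print(avg(first))          # average growth rate
print(avg(second))         # positive => emissions growth is accelerating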
— Chris · Jul 7, 04:55 PM · #
Thanks for continuing to engage your critics and supporters alike.
I am not going to take the position that all “economic projections over centennial time scale[s] are basically worthless”, but rather that I find the specific economic models I’ve seen you argue in favor of seem to be profoundly flawed.
Take the DICE model, for example. Do you think that it’s even remotely plausible to assign a loss of only half [50%] of GDP to a temperature increase of 19°C? If so, can you give us a rough idea of how 50% of GDP can be maintained on a planet 19°C warmer than now?
Thanks,
— thingsbreak · Jul 7, 07:36 PM · #
thingsbreak:
Thanks.
I think all models break down when forced to extrapolate way beyond historical experience in any dimension. Time scale is one of these, and temperature is another. I’ve said several times that if Weitzman’s 1% confidence interval of ~22°C were defensible, I’d be in favor of aggressive carbon restrictions, as this would be so clearly disastrous.
— Jim Manzi · Jul 7, 09:00 PM · #
I think Ben and Chris make some excellent points, and it is true to some degree that we are now in a place where we can debate the policy, not the science. However, US politicians are still woefully weak on the science details. Back to the policy questions, though: haven’t you given the game away, Jim, with your response to Ben? Once you acknowledge the trade-off is about deciding when to act, how is it rational to prefer acting too late rather than too soon? Certain aspects of the policy problem seem clearly to warrant erring on the early-action side, e.g. irreversibility, the extreme scenarios, response time lags, etc. How do you respond to this?
— Cameron McDonald · Jul 7, 09:32 PM · #
Jim, there are those that want to throw out models entirely. But there are others who argue two more specific points:
1. Economic model projections historically overestimate the costs of new regulation and underestimate the benefits.
2. The IPCC climate models likely underestimate the speed and impact of climate changes — that’s the trend in the science since 2006.
So the damages are bigger and closer than you’re accounting for, and the costs of addressing the problem are smaller than you account for.
Also, it’s worth noting that the longer you delay dealing with the problem, the more it will cost, for the simple reason that we’re constantly replacing infrastructure (housing, etc.) that will last 30 years; if we build it inefficiently, we lock in high emissions and will probably end up having to tear it down.
— David Roberts · Jul 7, 09:33 PM · #
Meant to add: you should read, and seriously grapple with, this paper:
http://www.mitpressjournals.org/doi/abs/10.1162/glep.2006.6.4.56
(I can email you a copy if you like.)
— David Roberts · Jul 7, 09:37 PM · #
“I think all models break down when forced to extrapolate way beyond historical experience in any dimension. Time scale is one of these, and temperature is another.”
So what about the DICE and similar models do you find to be plausible, given that they admittedly don’t capture realistic costs at the extreme high end of warming? Stern claims that this is because they are basically quadratic fits from what we believe the costs of emissions are today through projected costs at 2°C – which is gobsmacking when talking about temperatures even far below 19°C. Put differently, why do you accept the DICE-type models on warming of ~6°C?
Cheers,
— thingsbreak · Jul 7, 09:44 PM · #
Cameron:
Because a subset of “too early” is “responded when events would have obviated the need for a response as time unfolded.” AGW losses big enough to justify coercive emissions mitigation might or might not ever arise (i.e., in reference to my post and comments, linear trends might very well not hold true in the future).
— Jim Manzi · Jul 8, 12:24 AM · #
thingsbreak:
1. Reasonable-sounding assumptions (as compared to any known competitor)
2. Alignment with other similar model results, and the ability to explain variances simply and intuitively (e.g., vs. Stern) in a manner that makes a convincing case for its superiority.
3. My ability to replicate results very approximately using back-of-envelope math.
4. General recognition by the scholarly community.
As per my post, and I don’t mean this as a wiseacre comment, I’m happy to be presented with superior alternative methods.
— Jim Manzi · Jul 8, 12:30 AM · #
David:
I’ve given my take on this issue in a prior post on prediction model bias: http://theamericanscene.com/2009/06/23/prediction-model-bias
That’s very hard to accept given that we’ve had no increase in the most fundamental metric of global warming – average temperature – in about 10 years. I’m constantly having to point out to people on the Right that there is a lot of noise-to-signal in all of these measurements of causal impact, and that 10 years is too short a measurement period to falsify a global climate model. In the same way, the three years since 2006 is way, way too short, IMHO.
I’d love to get a copy of the paper.
Thanks,
Jim
— Jim Manzi · Jul 8, 12:38 AM · #
Jim, you can read the paper here
— Kristoffer V. Sargent · Jul 8, 01:15 AM · #
JA:
Thanks very much.
— Jim Manzi · Jul 8, 02:03 AM · #
“That’s very hard to accept given that we’ve had no increase in the most fundamental metric of global warming – average temperature – in about 10 years.”
Well, that’s simply not true and you can verify this yourself by plotting the data for the last 120 months of GISTEMP, HadCRUT, UAH, RSS, etc. The linear trends are all positive.* What’s that I hear someone (not necessarily Jim) say? ‘But if you change the period to 11, 9, X years, etc. the trend becomes flat or cooling’? Exactly, I reply. Such short periods for temp are not meaningful when discussing climatological trends (e.g. here or here).
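The window-sensitivity described above is easy to demonstrate with synthetic data. The sketch below fits an ordinary least-squares trend to an invented series of monthly anomalies – a fixed warming rate plus noise, not any real temperature record – and shows how much the fitted trend moves around for short trailing windows:

```python
# Illustrative only: why short windows give unstable temperature trends.
# Synthetic monthly "anomalies": a fixed warming trend plus large noise,
# crudely mimicking ENSO-style variability. Not real station data.
import random

random.seed(42)
TRUE_TREND = 0.017 / 12           # ~0.17 C/decade, expressed per month
MONTHS = 360                      # 30 years of synthetic data
series = [TRUE_TREND * t + random.gauss(0, 0.15) for t in range(MONTHS)]

def ols_slope(ys):
    """Ordinary least-squares slope of ys regressed against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Fitted trend per decade for trailing windows of different lengths:
for years in (10, 20, 30):
    window = series[-years * 12:]
    print(years, round(ols_slope(window) * 120, 3))
```

Over the full 30 years the fitted slope sits close to the true trend, while the short trailing windows wander widely with the noise, which is why a decade (or a choice of start year) tells you little about the underlying climatological trend.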
“I’m constantly having to point out to people on the Right that there is a lot of noise-to-signal in all of these measurements of causal impact, and that 10 years is too short a measurement period to falsify a global climate model. In the same way, the three years since 2006 is way, way too short, IMHO.”
You’re comparing apples to oranges there. David’s talking about our understanding of some of the science since the AR4, not the temperature or any other observational trend since 2006. For example, the AR4 is explicitly too conservative in dealing with dynamic ice sheet collapse in terms of SLR projection, while paleoclimatic studies since the cutoff window have shown that SLR on the order of 1m or more by end of century is possible. This issue is not contingent upon any change in the trend of SLR due to melt water/ice sheet decay over the last 3 years.
I’ll see about some alternative econ models as well.
*[The “10 years no warming” meme depended on it actually being within 10 years of the ENSO-boosted ’98 record. The problem with cherry-picking a period with no fixed dates is that the window moves on you. Anyone who wants to keep this canard alive has to explicitly name 1998, which is a little too transparent to those passingly familiar with the subject, or move to the next bump to anchor to, which is probably likewise too obvious to “stick”.]
Cheers,
— thingsbreak · Jul 8, 03:09 AM · #
Jim-
This is not exactly the case. We can use economic forecasts to figure out ranges for the amount of carbon we’re putting in the atmosphere, but the key variable for the environmental forecasts is how much carbon over how long a period of time. And it’s possible to generate those numbers outside of specific economic forecasts – say, by plugging in the numbers from various carbon treaties, or doing simple linear extrapolation on historical carbon emissions. It’s not the case that bad economic forecasting invalidates the environmental forecasting.
Well, no, because even if we can’t forecast the economic and political fallout from global warming, we can do a reasonably good job of forecasting the environmental impact (e.g. Lynas’ Six Degrees ), and make our judgements on that basis.
Two problems with this approach, Jim. First, it’s simply NOT the case that we can make rational decisions without quantitative interpretation. We make legitimate, meaningful policy choices ALL THE TIME without the kind of quantitative analysis you’re talking about. We go to war (for good or ill) based not on cost/benefit analysis, but on more fundamental questions of values and morality. We make decisions about public health policy based not on the expected economic value of each human life saved, but with the implicit understanding that life is, in and of itself, a good thing, even if it’s not economically quantified.
Somebody in one of these threads asked you, IIRC, if you’d completely and utterly destroy a small, but meaningful percentage of the planet – 2 percent or so – for a big enough economic payoff. I don’t think you ever explicitly answered that, but according to the cost/benefit analysis you’re preaching here, it seems like there’s no question that you’d do so, if enough economic value could be generated. My question to you, Jim, is, do you understand that for some people there would not be a payoff big enough to do such a thing? And that such a decision is at least as legitimate as what you’re proposing here? What if I could show you that re-instituting slavery would create a vast surplus of economic value, such that even the enslaved would ultimately have a higher standard of living – would CBA convince you that we need to bring back slavery?
Second, you make the argument that your critics are attacking the idea of cost/benefit itself. This is not, strictly speaking, true – while I certainly believe there are real problems with CBA w/r/t this particular question, the idea of CBA is certainly legitimate. However, it greatly depends on accurate data – and you’ve all but admitted you don’t have that data. You point to reams of analysis that various groups have done on potential economic impacts from global warming, but you freely admit there are major factors – such as ocean acidification – that your analysis doesn’t even cover. And unlike in computer science, where upper bounds and absolute worst case performance can be reliably calculated, the probability distributions you’re dealing with here ultimately rest on the same rhetorical hand-waving you disparage elsewhere – the DICE assumptions are “reasonable sounding” to you, perhaps, but not to me. How is that fundamentally any different from a temperature rise of 3 degrees C sounding reasonable to you, but not to me?
In the end, perhaps this is our major stumbling block of comprehension, Jim – I’m not trying to manage the “human economy”, I’m trying to prevent irreversible damage to the planet. The former is irrevocably dependent on the latter, but the latter has meaning and value regardless of what happens to the former. That being the case, the decision here is close to a no-brainer for me, regardless of how foolish that might seem to you.
— (some other) Chris · Jul 8, 07:31 AM · #
thingsbreak:
Yes, I see that yours is the more natural reading of David’s point about what has changed since 2006. I’ve addressed this point elsewhere, but in quick summary, the reason we have the IPCC process is to integrate many diverse technical analyses into a consensus viewpoint. If I were competent to do this on my own, I wouldn’t have to rely on it, but I do.
— Jim Manzi · Jul 8, 01:20 PM · #
>>1. Economic model projections historically overestimate the costs of new regulation and underestimate the benefits.
>I’ve given my take on this issue in a prior post on prediction model bias: http://theamericanscene.com/2009/06/23/prediction-model-bias
In response to both David and that prior post, in particular:
“if one could actually observe a reasonably consistent over-estimate in a provably-relevant reference class of prior predictions, this would lead a competent forecaster to make a transparent “topside” adjustment to get a best-available forecast.”
Pollution-permit auction markets seem to me to be the best area to look at in answering this question. My impression is that the cost of those permits is almost always wildly overestimated prior to the systems’ implementation. IOW, the cost of preventing or ameliorating the pollution is less than predicted (especially by industry), so get-out-of-jail-free cards don’t end up costing all that much.
I’ve done some digging but have not found any solid research on this. Would love to hear from others.
— Steve Roth · Jul 8, 05:29 PM · #
This whole conversation is way above my pay grade, but I thought I read somewhere that the El Nino spike is necessary to claim that there has been global cooling over the past decade, but that you can say there has been no significant heating over the past ten years without relying on such selective reference points. I hope that someone can clarify, or tell me what I misunderstood.
— Blar · Jul 8, 10:15 PM · #