I blame myself for what I consider to be the pretty disappointing responses to my prior post (especially in the normally excellent TAS comboxes). The fact that so many people have reacted to things I wasn’t trying to say indicates that the communication failure is mine. So let me try to be clear about what I was actually trying to say.
When confronted with objections to an apparent scientific consensus, one valid approach is simply to assemble a wide variety of relevant scientists, ensure that the questions posed to them are technical questions within their scope of competence, and rely on their findings. This is the basic idea behind the UN IPCC, and the AGW reports of various national scientific academies. This has been my approach in the case of AGW, where I have always taken the technical findings of the IPCC as the starting point for any policy analysis on this topic.
George Will (or at least the view that was reasonably imputed to him by his interlocutors) questions the validity of the scientific consensus on AGW. His interlocutors, instead of just relying on the IPCC process, tried to engage the substance of George Will’s quasi-scientific objection. They responded by saying that he has not looked at a long enough trend line.
The primary point of my post was that while I agree that George Will’s (implicit) attempted falsification of AGW theory is not compelling, neither is the logic used by his interlocutors. Both logics share a common source of failure: looking for an underlying “trend” in the temperature record independent of physical causality. There is no magic “trend”, but rather a set of causal effects, grounded in physical interactions, that drive temperature. The scientific assertion made by the global climate science community is that we have built models that allow us to understand these effects with sufficient precision to make useful forward predictions. When evaluating this assertion, then, the relevant standard is not “Is the rate of warming slowing or accelerating?”, but rather “How accurately are our models predicting the rate of warming?”. That is, we should rationally care about deviation from prediction, not deviation from trend. This is why I described George Will’s (implicit) method for addressing the certainty of our scientific knowledge as “misguided”, which, in addition to explicitly disagreeing with his conclusions, seems like a funny way of defending him.
The secondary point of the post was that a component of any well-structured prediction modeling process is to have model evaluation groups separate from the model-building teams that have different incentives and reporting structures, roughly analogous to a QA team for software development or fact-checkers at a magazine. One key task of such model evaluation teams is typically to escrow copies of code used to make predictions, log forward predictions made at time X for some outcome after time X, then run the code at the time of the predicted event with actual data entered for all inputs other than the asserted causal factor, and compare the resulting model output to actual outcomes. This is done across a range of predictions to create distributions of model error. While there have been some kludgey, one-off attempts to do something like this for the Hansen 1988 testimony, and a group has tried to look at single-year predictiveness of global climate models, there is nothing like a structured program in place to do this for climate models. Such approaches are always imperfect – and I tried to point out in the post some of the reasons that this would be especially problematic in the case of global climate models – but it would still provide a far better basis for the discussion of prediction adequacy than we have now.
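To make the escrow-and-score idea concrete, here is a minimal sketch in Python. Every name and number below (`LoggedPrediction`, `evaluate`, the sample anomaly values) is a hypothetical illustration of the workflow, not any existing validation system: forward predictions are logged at time X, and once the target date arrives they are compared to observed outcomes to build a distribution of model error.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class LoggedPrediction:
    made_at: int        # year the prediction was escrowed
    target_year: int    # year the prediction is about
    predicted: float    # e.g., predicted temperature anomaly (deg C)

def evaluate(predictions, actuals):
    """Score escrowed forward predictions against observed outcomes.

    `actuals` maps target_year -> observed value. Only predictions whose
    target year has arrived (i.e., appears in `actuals`) are scored.
    """
    errors = [p.predicted - actuals[p.target_year]
              for p in predictions if p.target_year in actuals]
    return {
        "n": len(errors),
        "mean_error": mean(errors),   # bias: systematic over/under-prediction
        "error_spread": stdev(errors) if len(errors) > 1 else 0.0,
    }

# Hypothetical escrowed predictions and observed anomalies:
escrow = [
    LoggedPrediction(2000, 2005, 0.55),
    LoggedPrediction(2000, 2010, 0.65),
    LoggedPrediction(2000, 2015, 0.75),
]
observed = {2005: 0.54, 2010: 0.61, 2015: 0.83}
print(evaluate(escrow, observed))
```

Run across many predictions and many models, this kind of bookkeeping yields the error distributions that a separate evaluation group, rather than the model-building team, would be responsible for maintaining.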
What’s especially ironic about a lot of the commentary on the post is that many people take assertions of uncertainty in climate forecasts as undercutting the case for emissions mitigation, so those on the Right argue for uncertainty, and those on the Left argue the opposite. In the sophisticated AGW debate, the economic justification for mitigation is seen, conceptually, as an insurance premium. If the expected warming takes place with the expected effects, it is very difficult to justify the economic costs of mitigation; mitigation is therefore best justified as a hedge against much-worse-than-expected effects. Therefore, the greater the uncertainty in climate prediction, the stronger the case for mitigation – uncertainty is not our friend. So before you accuse me of intellectual dishonesty, recognize that in pointing out limitations in the current practice of climate model validation, I am actually arguing a point that cuts against my stated policy preference.
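The insurance-premium logic can be illustrated with a toy expected-damage calculation. The quadratic damage function and all numbers below are purely hypothetical assumptions; the point is only that when damages are convex in warming, a wider spread of outcomes raises expected damages even though mean warming is unchanged, which is why greater uncertainty strengthens rather than weakens the case for paying a mitigation premium.

```python
def damage(warming_c):
    # Hypothetical convex damage function (fraction of GDP):
    # damages grow with the square of warming.
    return 0.01 * warming_c ** 2

def expected_damage(outcomes):
    # outcomes: list of (probability, warming in deg C) pairs
    return sum(p * damage(w) for p, w in outcomes)

# Two distributions with the same mean warming (3 C) but different spreads:
narrow = [(0.5, 2.5), (0.5, 3.5)]   # small spread
wide   = [(0.5, 1.0), (0.5, 5.0)]   # large spread

print(expected_damage(narrow))
print(expected_damage(wide))    # larger, despite identical mean warming
```

Because the damage function is convex, the wide distribution produces higher expected damages than the narrow one; a risk-averse planner would accordingly pay more to insure against it.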