Simple Model Leaves Expensive Climate Models Cold
[Editor’s note: J. Scott Armstrong and Kesten C. Green, first time guest posters, are leading researchers in the field of forecasting. Scott Armstrong is a Professor at the Wharton School of the University of Pennsylvania and Kesten Green is a Senior Research Fellow at the Business and Economic Forecasting Unit at Monash University]
We have recently proposed a model that provides forecasts that are over seven times more accurate than forecasts from the procedures used by the United Nations Intergovernmental Panel on Climate Change (IPCC).
This important finding, which we report in an article titled “Validity of climate change forecasting for public policy decision making” in the latest issue of the International Journal of Forecasting, is the result of a collaboration between climate scientist Willie Soon of the Harvard-Smithsonian Center for Astrophysics, and ourselves.
In an earlier paper, we found that the IPCC’s approach to forecasting climate violated 72 principles of forecasting. To put this in context, would you put your children on a trans-Atlantic flight if you knew that the plane had failed engineering checks for 72 out of 127 relevant items on the checklist?
The IPCC violations of forecasting principles were partly due to their use of models that were too complex for the situation. Contrary to everyday thinking, complex models provide forecasts that are less accurate than forecasts from simple models when the situation is complex and uncertain.
Confident that a forecasting model that followed scientific forecasting principles would provide forecasts that were more accurate than those provided by the IPCC, we asked Willie Soon to join us in developing a model that was more consistent with forecasting principles and knowledge about climate.
The forecasting model we chose was the so-called “naïve” model. The naïve model assumes that things will remain the same. It is such a simple model that people are generally not aware of its power. In contrast to the IPCC’s central forecast that global mean temperatures will rise by 3°C over a century, our naïve model simply forecasts that temperatures next year and for each of 100 years into the future would remain the same as the last year’s.
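The no-change idea can be stated in a few lines of code. The sketch below uses made-up temperature values, not the actual Hadley series, and the variable names are ours:

```python
# A minimal sketch of the naive (no-change) model: every future year is
# forecast to equal the last observed value. Temperatures are hypothetical.
temps = {2005: 14.48, 2006: 14.50, 2007: 14.49}  # illustrative annual means, in degrees C

last_year = max(temps)
naive_forecast = temps[last_year]  # the single number used for every horizon

# Forecasts for horizons 1 through 100 years ahead are all identical.
forecasts = {last_year + h: naive_forecast for h in range(1, 101)}
print(forecasts[last_year + 1], forecasts[last_year + 100])  # 14.49 14.49
```

However far ahead one looks, the forecast never changes; only the observed outcomes do, which is why the model's error can be measured at every horizon.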
The naïve model approach is confusing to non-forecasters who are aware that temperatures have always varied. Moreover, much has been made of the observation that the temperature series that the IPCC use shows a broadly upward trend since 1850 and that this is coincident with increasing industrialization and associated increases in manmade carbon dioxide gas emissions.
To test the naïve model, we simulated making annual forecasts from one to 100 years into the future, starting with 1850’s global average temperature as our forecast for the years 1851 to 1950. We then repeated this process, updating the forecast origin for each year up through 2007. This produced 10,750 annual average temperature forecasts across all horizons. It was the first time that the IPCC’s forecasting procedures had been subjected to a large-scale test of the accuracy of the forecasts they produce.
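The rolling-origin simulation just described can be sketched as follows. We substitute a synthetic random-walk series for the actual Hadley record (the real data are not reproduced here), but the origin-and-horizon bookkeeping follows the description above and yields the same total of 10,750 forecasts:

```python
# Sketch of the rolling-origin test: each year from 1850 to 2006 serves as a
# forecast origin, and the naive forecast for horizons 1..100 is scored
# against every target year up to 2007. The series itself is synthetic.
import random

random.seed(0)
years = list(range(1850, 2008))  # 1850 through 2007 inclusive
temps = {}
t = 13.8  # hypothetical starting global mean, degrees C
for y in years:
    t += random.gauss(0, 0.1)  # small year-to-year wobble, illustrative only
    temps[y] = t

errors = []  # (horizon, absolute error) pairs
for origin in years[:-1]:          # last usable origin is 2006
    for h in range(1, 101):        # horizons of 1 to 100 years
        target = origin + h
        if target > years[-1]:     # no outcomes beyond 2007
            break
        errors.append((h, abs(temps[target] - temps[origin])))

mae = sum(e for _, e in errors) / len(errors)
print(len(errors))  # 10750, matching the count reported in the text
```

The count is pure arithmetic: origins 1850 through 1907 each allow the full 100 horizons (58 × 100 = 5,800), and origins 1908 through 2006 allow 99 down to 1 (4,950), for 10,750 in all.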
Over all the forecasts, the IPCC error was 7.7 times larger than the error from the naïve model.
While the superiority of the naïve model was modest for one- to ten-year-ahead forecasts (where the IPCC error was 1.5 times larger), its superiority was enormous for the 91- to 100-year-ahead forecasts, where the IPCC error was 12.6 times larger.
Is it proper to conduct validation tests? In many cases, such as the climate change situation, people claim that: “Things have changed! We cannot use the past to forecast.” While they may think that their situation is unique, there is no logic to this argument. The only way to forecast the future is by learning from the past. In fact, the warmers’ claims are also based on their analyses of the past.
Could one improve upon the naïve model? We believe so. The naïve model violates some forecasting principles. For example, it violates the principle that one should use as long a time series as possible, because it bases every forecast solely on the global average temperature of the single year prior to the forecast. It also fails to combine forecasts from different reasonable methods. We planned to start simple with this self-funded project and then obtain funding to undertake a more ambitious forecasting effort that ensured all principles were followed. This would no doubt improve accuracy. However, we were astonished at the accuracy of forecasts from the naïve model. For example, the mean absolute error of the 108 fifty-year-ahead forecasts was only 0.24°C.
It is difficult to see any economic value to reducing such a small forecast error.
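For the arithmetic-minded, the figure of 108 fifty-year-ahead forecasts follows directly from the origin and target years used in the simulation:

```python
# With annual origins starting in 1850 and no target year later than 2007,
# a 50-year horizon permits origins 1850 through 1957 inclusive.
horizon = 50
first_origin, last_target = 1850, 2007
origins = range(first_origin, last_target - horizon + 1)
print(len(origins))  # 108
```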
We concluded our paper with the following thoughts:
Global mean temperatures have been remarkably stable over policy-relevant horizons. The benchmark forecast is that the global mean temperature for each year for the rest of this century will be within 0.5°C of the 2008 figure.
There is little room for improving the accuracy of forecasts from our benchmark model. In fact, it is questionable whether practical benefits could be gained by obtaining perfect forecasts. While the Hadley temperature data…drifts upwards over the last century or so, the longer series…shows that such trends can occur naturally over long periods before reversing. Moreover, there is some concern that the upward trend observed over the last century and a half might be at least in part an artifact of measurement errors rather than genuine global warming (McKitrick & Michaels, 2007). Even if one accepts the Hadley data as a fair representation of temperature history, our analysis shows that errors from the naïve model would have been so small that decision makers who had assumed that temperatures would not change would have had no reason for regret.