“… climate scientists cannot conduct controlled experiments on the Earth…. Instead they use … Global Climate Models, or GCMs–mathematical representations of the Earth that run on computers.”
“Processes operating at smaller scales [than 100 km], such as clouds, cannot be represented explicitly in the models but must instead be parameterized.”
“Parameterizations … [are] ad hoc constructions that are tuned so the model produces a realistic present-day climate. Consequently, parameterizations are one of the largest sources of uncertainty in GCMs.”
– Andrew Dessler and Edward Parson, The Science and Politics of Global Climate Change: A Guide to the Debate (Cambridge University Press, 2000), pp. 19–20.
The above explanation by climate scientist Andrew Dessler (co-author Parson is a lawyer/public policy specialist) opens the door to asking the question: are climate models ready for prime time?
Dessler goes on to say that “models can be tested by examining how well they reproduce the Earth’s actual climate.” And: “Considered in total, current models do a remarkable job of reproducing observations, lending confidence to their prediction” (p. 20).
Necessarily incomplete (“parameterized”) models with uncertain physical equations can be “right” for the wrong reasons, not only wrong for the right ones.
There is a burden of history to alarmist models. The curse of Malthusianism, beginning with (at least) the 1972 Club of Rome/MIT “The Limits to Growth” computer model, is well documented. And today, the debate over the utility of climate models (“better than nothing” may be worse than nothing if it conveys false knowledge) rages in the popular press and among wonks.
“The Economist, which usually just parrots the party line, includes a pretty good article explaining the basics of computer climate modeling, and especially their large limitations and defects,” noted Stephen Hayward at PowerLine (September 23). “Although the magazine tries hard not to sound openly skeptical, it is hard for any unbiased reader to finish this piece and think ‘the science is settled’.”
The article, “Predicting the Climate Future is Riddled with Uncertainty” (September 2019), includes these statements (reproduced by Hayward):
[Climate modeling] is a complicated process. A model’s code has to represent everything from the laws of thermodynamics to the intricacies of how air molecules interact with one another. Running it means performing quadrillions of mathematical operations a second—hence the need for supercomputers.
And using it to make predictions means doing this thousands of times, with slightly different inputs on each run, to get a sense of which outcomes are likely, which unlikely but possible, and which implausible in the extreme.
Even so, such models are crude. Millions of grid cells might sound a lot, but it means that an individual cell’s area, seen from above, is about 10,000 square kilometres, while an air or ocean cell may have a volume of as much as 100,000km3. Treating these enormous areas and volumes as points misses much detail.
Clouds, for instance, present a particular challenge to modellers. Depending on how they form and where, they can either warm or cool the climate. But a cloud is far smaller than even the smallest grid-cells, so its individual effect cannot be captured. The same is true of regional effects caused by things like topographic features or islands.
Building models is also made hard by lack of knowledge about the ways that carbon—the central atom in molecules of carbon dioxide and methane, the main heat-capturing greenhouse gases other than water vapour—moves through the environment.
Understanding Earth’s carbon cycles is crucial to understanding climate change. But much of that element’s movement is facilitated by living organisms, and these are even more difficult to understand than physical processes.
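The grid arithmetic and the ensemble procedure the article describes can be sketched in a few lines. Everything here—the zero-dimensional toy model, the sensitivity distribution, the 10 km column depth—is an illustrative assumption, not taken from any actual GCM:

```python
import random

# Sanity check on the magazine's arithmetic: a ~100 km grid cell, seen
# from above, covers 100 km x 100 km = 10,000 km2; a column 10 km deep
# (an assumed depth) encloses 100,000 km3.
cell_area_km2 = 100 * 100
cell_volume_km3 = cell_area_km2 * 10
print(cell_area_km2, cell_volume_km3)  # 10000 100000

def toy_warming(sensitivity, forcing=3.7):
    """Equilibrium warming (deg C) of a zero-dimensional energy-balance toy:
    warming = sensitivity (deg C per W/m2) x forcing (W/m2)."""
    return sensitivity * forcing

# "Thousands of runs with slightly different inputs": perturb the
# (hypothetical) sensitivity parameter across an ensemble and bin outcomes.
random.seed(0)
runs = [toy_warming(random.gauss(0.8, 0.2)) for _ in range(10_000)]
likely = sum(1.5 <= w <= 4.5 for w in runs) / len(runs)
print(f"fraction of ensemble in a 1.5-4.5 deg C band: {likely:.2f}")
```

The real process differs in scale, not in kind: supercomputers run the perturbed ensemble over millions of grid cells instead of one.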
“True knowledge results in effective action,” is one of my favorite quotations. The better-than-nothing view of models misses this point.
Mototaka Nakamura (“Confessions of a Climate Scientist”)
The modeling quandary was also explained by Mototaka Nakamura in a Japanese-language booklet on “the sorry state of climate science.” An expert on climate modeling and the inputs driving the outputs, he is someone to listen to.
“These models completely lack some critically important climate processes and feedbacks,” he states, “and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.”
Specific problems include unknowns regarding large and small-scale ocean dynamics; aerosol-generating clouds; ice-albedo (reflectivity) feedbacks; and water vapor causality.
As a result, model parameters are “tuned” (fudged) to align with what is believed to be causal reality. “The models are ‘tuned’ by tinkering around with values of various parameters until the best compromise is obtained,” Nakamura admits.
I used to do it myself. It is a necessary and unavoidable procedure and not a problem so long as the user is aware of its ramifications and is honest about it. But it is a serious and fatal flaw if it is used for climate forecasting/prediction purposes.
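The tuning procedure Nakamura describes can be caricatured in a few lines: sweep an uncertain parameter until a toy model best matches a present-day observation. The model form and every number below are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical observed present-day global mean temperature (deg C).
OBSERVED_TEMP = 14.0

def toy_model(cloud_param):
    # Toy relationship: a baseline climate plus a parameterized cloud effect.
    # The coefficients are illustrative assumptions, not physics.
    return 13.0 + 2.5 * cloud_param

# "Tinkering around with values of various parameters": sweep candidates
# and keep the one that best reproduces the observation.
candidates = [i / 100 for i in range(101)]  # 0.00 ... 1.00
best = min(candidates, key=lambda p: abs(toy_model(p) - OBSERVED_TEMP))
print(best, toy_model(best))  # 0.4 14.0
```

The tuned value reproduces the present-day observation perfectly, yet nothing in the procedure establishes that the parameter captures the right physics—which is exactly Nakamura's caveat about using tuned models for prediction.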
The above analyses are in keeping with the views (really warnings) about climate modeling made two decades ago by Gerald North, certainly a distinguished climate scientist, during his consulting era with Enron Corp. Some of his quotations follow:
“We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.” [Gerald North (Texas A&M) to Rob Bradley (Enron), November 12, 1999]
“[Model results] could also be sociological: getting the socially acceptable answer.” [Gerald North (Texas A&M) to Rob Bradley (Enron), June 20, 1998]
“There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.” [Gerald North (Texas A&M) to Rob Bradley (Enron), July 13, 1998]
“One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.” [Gerald North (Texas A&M) to Rob Bradley (Enron), October 2, 1998]
“The ocean lag effect can always be used to explain the ‘underwarming’….
The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)” [Gerald North (Texas A&M) to Rob Bradley (Enron), August 17, 1998]
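North's point about ocean-coupling slack can be illustrated with a toy calculation: if transient warming depends on both sensitivity and how much heat the ocean absorbs, different pairs of values reproduce the same record equally well. The functional form and all numbers are illustrative assumptions:

```python
def transient_warming(sensitivity, ocean_uptake, forcing=2.0):
    # Toy transient response: equilibrium warming scaled down by the
    # fraction of heat assumed absorbed by the ocean (the "fudge factor").
    return sensitivity * forcing * (1.0 - ocean_uptake)

record = 0.8  # hypothetical observed 20th-century warming (deg C)

# A low-sensitivity model with little ocean coupling...
low_sens = transient_warming(sensitivity=0.5, ocean_uptake=0.2)
# ...and a model twice as sensitive, coupled to "a little more ocean".
high_sens = transient_warming(sensitivity=1.0, ocean_uptake=0.6)

print(low_sens, high_sens)  # both 0.8, matching the record
```

Two models with sensitivities differing by a factor of two "mock the record about equally well," so agreement with the historical record alone cannot discriminate between them.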
Climate science is not settled as long as the physical processes behind climate change, not to mention climate models themselves, are in open debate. Models are no better than their assumptions. And models cannot be tested. Complexity cannot be modeled away where the results cannot be known to be right or wrong.
Assuming that models could reach a state of precision (a big assumption) does not rescue current modeling efforts. The burden of proof is on the alarmist models, not on the actual climate.