A Free-Market Energy Blog

Climate Model Magic: Washington Post Today, Gerald North Yesterday (Part IV in a series)

By Robert Bradley Jr. -- April 13, 2010

[The other parts of this series on the activism of Texas A&M climatologists are here: Part I, Part II, and Part III]

“If the models are as flawed as critics say … you have to ask yourself, ‘How come they work?'”

– Gavin Schmidt [NASA], quoted in David A. Fahrenthold, “Scientists’ Use of Computer Models to Predict Climate Change Is Under Attack,” Washington Post, April 6, 2010.

“We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.”

     – Gerald North (Texas A&M) to Rob Bradley (Enron), November 12, 1999

A Washington Post piece last week, “Scientists’ use of computer models to predict climate change is under attack,” has brought attention to the importance of climate modeling in the current debate over climate sensitivity to greenhouse gases (GHGs). And not surprisingly, few mainstream IPCC scientists want to cast doubt on the current state of the art.

But what do open-minded climate scientists who are not formal modelers say behind closed doors? For part of this answer, I have collected these quotations from Dr. Gerald North of Texas A&M’s Department of Atmospheric Sciences and Department of Oceanography.

Prior to Climategate, at least, North was a straight shooter on the problems of climate models. [North’s Left turn to go arm-in-arm with Andrew Dessler with regard to Climategate is examined here, here, and here.]

North’s estimate of climate sensitivity to greenhouse gases is about one-third below the average of the models that make up the IPCC projection (about 3ºC). As he said:

“I agree that the case for 2ºC warming [for a doubling of manmade greenhouse gas forcing in equilibrium] is pretty strong.”

– Gerald R. North to Rob Bradley, email communication, August 13, 2007.

North’s error range is a quarter of a degree, so his warming estimate for a doubled GHG forcing is between 1.75ºC and 2.25ºC, the low end of which falls outside the IPCC range of 2ºC–4.5ºC. Yet Dr. North dares not advertise his dissent, or what he believes is climate realism versus model-contrived climate and the resulting alarmism.

Climate models are only as good as what goes into them, and our understanding of some of the physical processes that control key aspects of the climate (for instance, cloud behavior), and of how those processes respond to human alterations of atmospheric composition, is less than ideal. Models cannot magically generate the real, operative microphysics of climate to inform us what will happen when climate forcings are altered. Worse, a preference for a particular outcome can quickly turn garbage-in, garbage-out into alarmism-in, alarmism-out.

Models are not ready for prime time and may not be for many more years, if not decades. But this inconvenient fact is downplayed by the scientists involved for two reasons. One is the massive government funding of climate modeling, predicated on an assumed “climate problem.” Two, there is a widespread Malthusian virus among natural scientists: a belief that nature is optimal and man’s influence is bad. (Just the opposite might be the case.) So the happy middle of the debate has been absent.

Alarming, but Flawed, Climate Models

One of the reasons I am not a climate alarmist is because of Dr. North. I believe North points us toward the elusive happy middle of the science debate between ultra-skepticism and alarmism.

Below are some quotations from North on climate models. Given that temperatures are not much different today than they were at the time of these (Enron-era) emails, and that temperatures over the last 10–13 years sit at the very low end of model projections, one can hardly say that models have ‘solved’ climate in this period.

Jerry North on Climate Models

“[Model results] could also be sociological: getting the socially acceptable answer.”

– Gerald North (Texas A&M) to Rob Bradley (Enron), June 20, 1998

“There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.”

     – Gerald North (Texas A&M) to Rob Bradley (Enron), July 13, 1998

“One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.”

     – Gerald North (Texas A&M) to Rob Bradley (Enron), October 2, 1998

“We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.”

     – Gerald North (Texas A&M) to Rob Bradley (Enron), November 12, 1999

“The ocean lag effect can always be used to explain the ‘underwarming’….

The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)”

    – Gerald North (Texas A&M) to Rob Bradley (Enron), August 17, 1998
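North’s tuning point can be illustrated with a toy zero-dimensional energy-balance model. This is a sketch only, not any actual GCM, and every number in it is illustrative: a high-sensitivity model coupled to a deeper ocean and a low-sensitivity model with a shallow ocean produce nearly the same century-scale warming under the same ramped forcing.

```python
# Toy zero-dimensional energy-balance model (illustrative only, not a GCM):
#   C * dT/dt = F(t) - lambda_ * T
# lambda_ (W/m^2/K) sets equilibrium sensitivity (roughly 3.7 / lambda_ per
# doubled CO2); C (W*yr/m^2/K) is the effective ocean heat capacity.
def transient_warming(lambda_, C, years=100, ramp=0.04, dt=0.1):
    """Euler-integrate the temperature response to a linearly ramping forcing."""
    T = 0.0
    for step in range(int(years / dt)):
        F = ramp * (step * dt)          # forcing in W/m^2
        T += dt * (F - lambda_ * T) / C
    return T

# High sensitivity (~4.6 C per doubling) but coupled to a deep, sluggish ocean...
deep = transient_warming(lambda_=0.8, C=70.0)
# ...versus low sensitivity (~2.0 C per doubling) with a shallow ocean:
shallow = transient_warming(lambda_=1.85, C=8.0)
# Both warm by roughly 2 C over the century: the transient record alone
# cannot tell them apart.
```

Here the “undetermined fudge factor” is C: coupling in a little more ocean hides a higher sensitivity behind a longer lag, which is exactly the slack North describes.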

Appendix: Excerpts from WP Article

“The computer models used to predict climate change are far more sophisticated than the ones that forecast the weather, elections or sporting results. They are multilayered programs in which scientists try to replicate the physics behind things such as rainfall, ocean currents and the melting of sea ice. Then, they try to estimate how emissions from smokestacks and auto tailpipes might alter those patterns in the future, as the effects of warmer temperatures echo through these complex and interrelated systems….

To check these programs’ accuracy, scientists plug in data from previous years to see if the model’s predictions match what really happened.

But these models still have the same caveat as other computer-generated futures. They are man-made, so their results are shaped by human judgment.

This year, critics have harped on that fact, attacking models of climate change that have been used to illustrate what will happen if the United States and other countries do nothing to limit greenhouse gas emissions. Climate scientists have responded that their models are imperfect, but still provide invaluable glimpses of change to come.

They have found themselves trying to persuade the public — now surrounded by computerized predictions of the future — to believe in these.

If policymakers don’t heed the models, “you’re throwing away information. And if you throw away information, then you know less about the future than we actually do,” said Gavin Schmidt, a climate scientist at NASA’s Goddard Institute for Space Studies.

“You can say, ‘You know what, I don’t trust the climate models, so I’m going to walk into the middle of the road with a blindfold on,’ ” Schmidt said. “But you know what, that’s not smart.”

Climate scientists admit that some models overestimated how much the Earth would warm in the past decade. But they say this might just be natural variation in weather, not a disproof of their methods.

As computers have become faster and cheaper, models both simple and sophisticated have proliferated across government, business and sports, appearing to offer precise answers to questions that used to be rhetorical.

They also depend on the computers running them. To accurately depict how individual clouds form and disappear, for instance, the computers that model climate change would need to be a million times faster. For now, the effects of clouds have to be estimated….

There are more than a dozen such models running around the world: mega-computers whose job is creating a virtual Earth.

These usually combine a weather simulation with other programs that mimic effects of rain and sun on the land, currents in the ocean, and emissions of greenhouse gases. First, these models imagine all the factors interacting within a “grid box” — an imaginary cube of land, water and sky that might be 60 miles long and 60 miles wide….

The pattern is the point. It is man’s signature, a guide to what could happen in the real world. All the major climate models seem to show that greenhouse gases are causing warming, climate scientists say, although they don’t agree about how much. A 2007 United Nations report cited a range of estimates from 2 to 11.5 degrees over the next century.

“It’s an educated, scientifically based guess,” said Michael Winton, an oceanographer at the National Oceanic and Atmospheric Administration. “But it’s a guess nonetheless.”

But Warren Meyer, a mechanical and aerospace engineer by training who blogs at www.climate-skeptic.com, said that climate models are highly flawed. He said the scientists who build them don’t know enough about solar cycles, ocean temperatures and other things that can nudge the earth’s temperature up or down. He said that because models produce results that sound impressively exact, they can give off an air of infallibility.

But, Meyer said — if the model isn’t built correctly — its results can be both precise-sounding and wrong.

“The hubris that can be associated with a model is amazing, because suddenly you take this sketchy understanding of a process, and you embody it in a model,” and it appears more trustworthy, Meyer said. “It’s almost like money laundering.”

Last month, a Gallup poll provided the latest evidence of a public U-turn on climate change. Asked if the threat of global warming was “generally exaggerated,” 48 percent said yes. That was up 13 points from 2008, the highest level of skepticism since Gallup started asking the question in 1997.

But scientists say that, during this time, they have only become more certain that their models work.

Put in the conditions on Earth more than 20,000 years ago: they produce an Ice Age, NASA’s Schmidt said. Put in the conditions from 1991, when a volcanic eruption filled the earth’s atmosphere with a sun-shade of dust. The models produce cooling temperatures and shifts in wind patterns, Schmidt said, just like the real world did.

If the models are as flawed as critics say, Schmidt said, “You have to ask yourself, ‘How come they work?’”

20 Comments


  1. Sean  

    Why are people reluctant to bring up the seasonal forecasts in the UK by the Met Office? It’s the only place where global circulation models (GCMs) are used to make verifiable seasonal predictions.

    They predicted warmer than normal for each of the last three years and, of course, the UK had several damp, overcast summers, a winter in 2008–2009 that was the coldest in 18 years, and then one in 2009–2010 that was the coldest in 30 years.

    The Met Office announced in March they were giving up and will no longer issue seasonal forecasts.


  2. Ferdinand E. Banks  

    Warren Meyer seems to have missed something. Some of the best minds in economics put together econometric forecasting models, and in the five or six years that I worked with models, and taught econometrics, I never encountered one of those ‘constructions’ that did what it was supposed to do. But don’t blame the model builders – blame the people who pay for their services.


  3. Richard W. Fulmer  

    Ferdinand,
    Interesting comment. Why do you believe that the blame lies with the people who are paying for the modeling? Thanks!


  4. Cal Beisner  

    In all the attempts to calculate and attribute individual forcings and feedbacks, we should be careful not to miss the forest for the trees. With no greenhouse effect, global average surface temperature would be about 0 deg F; with it but without feedbacks, it would be about 140 deg F; but with it and feedbacks, it’s about 59 deg F. Ergo, net feedbacks eliminate about 58% of greenhouse warming. Doubled effective CO2 would raise global average surface temperature about 1.2 deg C before feedbacks. GHGs are GHGs, and feedbacks are feedbacks. There is no reason to think net feedbacks would be any different for the increment of GHG than they are for background GHG. Ergo, we infer that they eliminate 58%, leaving us with climate sensitivity to doubled effective CO2 of about 0.5 deg C. The IPCC’s models don’t just exaggerate feedback; they get the sign wrong. 3 deg C is 1.2 + 150%; even 2 deg C is 1.2 + 67%. But the net feedbacks are negative, not positive. Neither the IPCC’s nor North’s estimate, therefore, is credible.
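Beisner’s back-of-the-envelope chain can be checked directly. Note that the 0 deg F / 140 deg F / 59 deg F figures and the 1.2 deg C no-feedback response are the commenter’s own premises, so this sketch only verifies his arithmetic, not the climatology behind it:

```python
# The commenter's premises (deg F): temperature with no greenhouse effect,
# with the greenhouse effect but no feedbacks, and as actually observed.
no_ghe, ghe_no_feedback, observed = 0.0, 140.0, 59.0

# Fraction of gross greenhouse warming that net feedbacks would eliminate.
feedback_reduction = (ghe_no_feedback - observed) / (ghe_no_feedback - no_ghe)

# Apply the same reduction to the assumed 1.2 deg C no-feedback response
# to doubled CO2.
no_feedback_2x_C = 1.2
sensitivity_C = no_feedback_2x_C * (1 - feedback_reduction)

print(round(feedback_reduction, 2))  # 0.58
print(round(sensitivity_C, 2))       # 0.51, i.e. about 0.5 deg C
```

The arithmetic does reproduce his 58% and ~0.5 deg C figures; whether the same feedback fraction applies to an incremental forcing is the contested assumption.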


  5. Charles Battig  

    The IPCC authors of two statements regarding computer modeling of climate within the IPCC 2007 and 2001 Assessment Reports would seem to have given reasoned readers the definitive reasons to treat the outputs of such scenario modeling as ascientific, politically driven dross:

    “More generally, the set of available models may share fundamental inadequacies, the effects of which can not be quantified.” 2007 IPCC report section 10.5.4

    “In sum, a strategy must recognize what is possible. In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” Third IPCC Assessment Report, Chapter 14.2.2.2 (page 774)

    The IPCC tippy-toes around “prediction” by using the term “scenarios”; however, the press, politicians, and a lot of “scientists” find the distinction too constraining in the promulgation of scary stories and funding grant applications.

    A recent article by MIT Professor R. Lindzen on climate modeling comes close to reflecting the much earlier caution of meteorologist/mathematician E. Lorenz, who published his insight in 1964. The serendipitous founder of chaos theory authored “Does a Climate Exist?” (Tellus) and voiced the possibility that “climate” never really settles down to a steady state; yet climate modelers are funded to ruminate on global temperature changes of one degree, plus or minus.


  6. Andrew  

    How come they work, Gavin? The answer is that if you try hard enough you can get a model to describe observed behavior, so the models appear to “work”. But they don’t really work, that is, have predictive power. When models are asked to do things they haven’t been taught to do ahead of time, they fail.

    The “agreement” in the twentieth century is because modelers could tweak aerosols. The “agreement” with Pinatubo is something models can do because it’s easy; indeed, a model only half as sensitive to CO2, or even less, could still fit Pinatubo. The question is, can they match the future? So far, they can’t.


  7. denis  

    Not only are these models dealing with climate ; they’re also simulating how the world’s economies will look in the distant future, and feeding that info back into their climate models. Now that’s REAL feedback!


  8. Robert Bradley Jr.  

    This caught my eye from 2008 Annual Report of the Federal Reserve Bank of Dallas (p. 7):

    “The excesses in subprime lending in the United States were fed by an excessive amount of faith in technically sophisticated approaches to risk management and a misguided belief that mathematical models could price securitized assets, including securities based on mortgages, accurately. These valuation methodologies were so technical and mathematically sophisticated that their utter complexity lulled many people into a false sense of security.

    In the end, the complexity proved hopelessly inadequate as an all-encompassing measure of risk, despite its frequent advertisement as such. The risk models employed turned out to be merely formulaic descriptions of the past and created an illusion of precision. Such approaches could not and cannot replace the forward-looking judgment of a seasoned professional.”


  9. Andrew  

    Rob-that’s a very interesting quote. In particular, this part reminds me of something:

    “The risk models employed turned out to be merely formulaic descriptions of the past and created an illusion of precision.”

    Tom Moriarty has been criticizing some recent work by RealClimate’s Stefan Rahmstorf for being exactly this kind of modeling. Rahmstorf believes that he can project future sea level with a complex formula relating temperatures to sea level. This was published in PNAS, but it turns out that while the model can describe the past, as Tom points out, it is unlikely to be an explanatory model, because it produces nonsensical results:

    http://climatesanity.wordpress.com/2010/04/12/rahmstorf-2009-off-the-mark-again-part-4-parallel-universes/

    In the first post of his series, tell me if this sounds familiar, Tom says:

    “It is very important to understand that VR2009’s model (equation 2) is put forth as more than just a description of sea leve [sic] rise for the last 120 years. Rather it is an explanation for that sea level rise…But the difference between a formula that describes and a formula that explains is essential to understand. For example, if you were to drop a rope on the ground in a random fashion you could come up with some kind of formula for the elevation of the rope at each point along the first half of its length. Perhaps you would fit the elevation to a 10th-order polynomial that mimics the pattern that the first half of the rope made. But that formula would have no power to tell you the pattern of the second half of the rope. Your formula would be a description of the pattern of the first half of the rope, but not an explanation. Also, if you lifted your rope and dropped it in a new pattern, your formula would now be useless to predict the elevation of the first half of the rope again.”

    If you read all of them, it gets really juicy: it turns out one can feed all kinds of scenarios for temperature change into Rahmstorf’s model, and the projected sea level can end up making zero sense.
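Moriarty’s rope analogy is easy to reproduce. The sketch below (random data, purely illustrative) fits a 10th-order polynomial to the first half of a “rope” and shows that, while the fit describes that half closely, its extrapolation to the second half is wildly off:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.arange(40, dtype=float)
rope = np.cumsum(rng.normal(size=40))   # a "rope dropped on the ground"

# Fit a 10th-order polynomial to the first half of the rope only.
first_x, second_x = x[:20], x[20:]
p = Polynomial.fit(first_x, rope[:20], deg=10)

fit_err = np.abs(p(first_x) - rope[:20]).max()      # small: it "describes"
extrap_err = np.abs(p(second_x) - rope[20:]).max()  # huge: it doesn't "explain"
```

The in-sample error is small while the out-of-sample error explodes, which is exactly Moriarty’s distinction between a formula that describes and one that explains.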


  10. Major Mike  

    Iowa Method Proves Global Warming

    “Scientists’ use of computer models to predict climate change is under attack” (Washington Post)

    The following is an excerpt from the above Washington Post article:

    “But scientists say that, during this time, they have only become more certain that their models work.

    Put in the conditions on Earth more than 20,000 years ago: they produce an Ice Age, NASA’s Schmidt said. Put in the conditions from 1991, when a volcanic eruption filled the earth’s atmosphere with a sun-shade of dust. The models produce cooling temperatures and shifts in wind patterns, Schmidt said, just like the real world did.

    If the models are as flawed as critics say, Gavin Schmidt, a climate scientist at NASA’s Goddard Institute for Space Studies said, “You have to ask yourself, ‘How come they work?’ ”

    “The Iowa Method, that’s why they work,” would have answered my late friend, Senior Master Sergeant Robert Kenneth Clough, US Air Force. According to Sgt. Clough, the Iowa Method is a brilliant model of simplicity and accuracy: you start with the desired conclusion, and then figure how you got there.

    In short, the Iowa Method is infallible.

    Apparently that’s the way climate scientists like Gavin Schmidt feel about their computer models.

    Do you want an Ice Age? Take the one that occurred, and build a model that you declare replicates it. The model will always create the Ice Age that occurred, although it may not create the one that’s coming.

    Does the computer model of the last Ice Age begin with a precipitous drop in atmospheric carbon dioxide? It should, if the computer model predicting current rapid warming based on rising atmospheric carbon dioxide is also the Ice Age model.

    The same is true for a volcanic eruption in 1991. It may seemingly explain the subsequent cooling, but what explains the past decade of cooling without volcanic eruptions? Kevin Trenberth in a Climategate e-mail says it’s a “travesty” that our climate science (based on computer models) doesn’t explain the current lack of warming.

    A major segment of climate models is based on paleoclimate reconstructions by Michael Mann, Keith Briffa, Phillip Jones, et al. In essence their studies, heavily weighted to analyses of tree rings, show very little variation in global temperatures for the past thousand years, then a rapid warming in the last half century corresponding to increased atmospheric carbon dioxide.

    Inconveniently, their tree-ring reconstructions show cooling after 1960 instead of warming, so researchers conveniently discarded that portion of their research and replaced it by grafting on instrumental records to “hide the decline.”

    Doing this obviated the need to explain why the tree-ring proxies for temperatures were good until 1960, then not good thereafter.

    While ethically problematical, their approach satisfied the dictates of the Iowa Method: show that global warming did not begin until atmospheric carbon dioxide increased.

    Non-scientists like Albert Arnold Gore, Jr. grabbed hold of this “correlation” by first stating that current warming was “unprecedented” for the past thousand years, then by doubling this claim to the time of Jesus Christ.

    Again inconveniently, Über-climate alarmist Phil Jones recently admitted that the Medieval Warm Period (900 to 1400 AD) may have been warmer than present, and that global climate had cooled recently.

    However, Jones ended by proclaiming what can only be regarded as inconsequential, that January 2010 was the warmest on record, the record having begun in 1979. I doubt any reputable scientists would regard one month in a 31-year record as proof of anything, up to and including considering it to be just cause for reordering energy production and the economies of all nations.

    Although the Iowa Method would approve.


  11. mij61  

    It’s very sad that our lives are being controlled by flawed computer software.


  12. Paddy  

    I remember the post failure analysis following the financial markets collapse that was in large part due to the flawed risk assessment models used for the credit default swap securities. These economic models were significantly less complicated than climate models. The failure was caused by the inability to model, much less perceive, the unknown unknowns. Of course, the known unknowns could be modeled based upon assumptions.

    Climate models, regardless of sophistication and the amount of computing power, cannot provide credible projections of climate for ten years or longer. The influence of known unknowns can be guessed, but correct outcomes are unlikely since those unknowns are not well understood.

    Modeling the unknown unknowns requires great imagination and incredible luck to even identify their possible existence. Successful modeling is remotely possible. Failure is inevitable.

    Why climate models have any credibility whatsoever can only be explained by Barnum’s axiom.


  13. Ferdinand E. Banks  

    Mr Fulmer, the blame lies with people who pay for lousy models and lousy information because they are too lazy and too busy with trivia to spend a few hours a week thinking about the lousy information and models that are thrown at them. Yesterday I attended a seminar dealing with coal, and concentrating on ‘coal capture and storage’. The speakers were intelligent and articulate, and perhaps honest – though not so honest as my good self – and the audience were mainly engineers, executives and think-tankers, with some students brought in to add young color to the proceedings. I don’t remember anything so depressing, because if the audience had listened carefully to what the speakers said, and thought about it for a few minutes during a coffee break, they would have realized that it was a nuthouse. Unfortunately I am in the habit of listening and thinking, and when I heard a good word put in for cap-and-trade I lost my cool, and moved into the judgemental mode.


  14. Chris  

    The models are bunk. They have too many fudge factors that can be tweaked to match any past temperature profile. No model predicted 13 years of cooling. What will they say if we have 20 years of cooling?

    Finally, didn’t those wall st banks have all sorts of sophisticated models? Worked well for them.


  15. Robert Bradley Jr.  

    I bring this series by Marco Evers, Olaf Stampf, and Gerald Traufetter in Spiegel Online International (April 1, 2010) to folks’ attention.

    http://www.spiegel.de/international/world/0,1518,686697,00.html


  16. rbradley  

    Just read this at RealClimate:

    Jones:

    I’ve been told that IPCC is above national FOI Acts. One way to cover yourself and all those working in AR5 would be to delete all emails at the end of the process

    [Response: Bradley and Cook are entitled to their opinion about anybody’s papers. Barnett is overstating the degree of agreement in the CMIP3 20thC runs and is wrong about the nature of the tuning that occurs. Jones is correct in both statements – models are all wrong (but the question is whether they are useful), and FOI legislation does not cover the IPCC and is not a document retention law. – gavin]

    Comment #54 http://www.realclimate.org/?comments_popup=9931


  17. Political Scientists: Gerald North and Andrew Dressler Double Down on Climate Alarmism | Watts Up With That?  

    […] North on Climate Models “We do not know much about modeling climate. It is as though we are modeling a human being. […]


  18. James Hansen: Time to Go CO2 Negative! - Master Resource  

    […] much about modeling climate,” climate scientist Gerald North of Texas A&M University once explained to me. “It is as though we are modeling a human being. Models are in position at last to tell […]


  19. James Hansen Misfires Again. | Climate Change Sanity  

    […] much about modeling climate,” climate scientist Gerald North of Texas A&M University once explained to me. “It is as though we are modeling a human being. Models are in position at last to tell us […]

