A Free-Market Energy Blog

Countering Sen. Kerry’s Catastrophic Climate Claims (Part 2)

By Kenneth P. Green -- December 24, 2009

Editor’s note: On November 10, 2009, Mr. Green testified before the Senate Committee on Finance about global warming. During the course of his testimony, an obviously agitated Senator John Kerry (D-Mass.) challenged Ken on different aspects of the climate debate. His responses are printed here. [Part I of this series ran yesterday.]

1. Peer-Reviewed Publishing Revisited

Kerry seemed to think it somehow damning that I do not choose to publish in the peer-reviewed climate literature. First—as I pointed out when I introduced myself—while I am an environmental scientist by training, I have chosen to work on policy analysis, which I believe is as important as, or more important than, the science.

However, I would challenge his very premise, which is that peer review is a meaningful indicator of trustworthiness. Plenty of research suggests that peer review is deeply flawed, biased in favor of both extreme and “positive” claims, resistant to nonconfirmation studies, and highly incestuous, because review committees regularly screen out divergent viewpoints and consist of peers who coauthor work with each other. While most research on problems with peer review involves medical literature, there is every reason to believe the same problems plague climate research.

As Drummond Rennie, M.D., deputy editor (West) of the Journal of the American Medical Association writes, “There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.” Peer review determines where rather than whether a paper should be published, Rennie says. However, from time to time, “shoddy science” ends up even in the most prestigious journals.

Examining peer review in the context of genetically modified food, Richard Horton, editor of the medical journal The Lancet, has observed that “the mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.”

For additional information on the limitations of peer review, I point you to the following papers:

John P. A. Ioannidis, “Why Most Published Research Findings Are False.” After examining the various elements that can lead to studies being published in peer-reviewed literature despite failing to accurately represent reality, the author concludes that “most research findings are false for most research designs and for most fields.”

Neal S. Young, John P. A. Ioannidis, and Omar Al-Ubaydli examine current publication practices in an economic framework, and conclude: “The current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic. This system can be studied by applying principles from the field of economics. The “winner’s curse,” a more general statement of publication bias, suggests that the small proportion of results chosen for publication are unrepresentative of scientists’ repeated samplings of the real world. The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact).” As an example, they point out that “an empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990–2004, showed that a quarter of the randomised trials and five of six non-randomised studies had already been contradicted or found to have been exaggerated by 2005. The delay between the reporting of an initial positive study and subsequent publication of concurrently performed but negative results is measured in years.”

Jeffrey D. Scargle has studied what is called the “file-drawer” problem in scientific research. That is, if a laboratory runs one hundred experiments that obtain a negative result and only one that reaches a positive result (which can happen by chance), the laboratory can simply publish the one study and relegate the others to the file drawer or trash can. Scargle concludes: “Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect because the unpublished results are imagined to be tucked away in researchers’ file cabinets, is a potentially severe impediment to combining the statistical results of studies collected from the literature. With almost any reasonable quantitative model for publication bias, only a small number of studies lost in the file drawer will produce a significant bias.”

In a study of articles from Nature and the British Medical Journal (BMJ), Emili Garcia-Berthou and Carles Alcaraz looked for erroneous statistics. They found that “at least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change one or more orders of magnitude.”

In a column, David F. Horrobin, a long-time critic of peer review, observes that “far from filtering out junk science, peer review may be blocking the flow of innovation and corrupting public support of science.”

For a specific example of the incest problem in climate research, see the report to Congress prepared by Edward J. Wegman, David W. Scott, and Yasmin H. Said. In this report, solicited by Congress itself, leading statistician Edward J. Wegman and colleagues were asked to study claims disputing the iconic “Hockey Stick” chart famously produced by Michael Mann at the University of Virginia. The “Hockey Stick” is a “reconstruction” of global average temperatures stretching far into the past (over 1,000 years) that shows a relatively smooth decline in temperatures over that time until about 1900, at which time temperatures appear to increase sharply. Not only did the Wegman panel uphold criticisms of that chart, it found improprieties in the review process: “In particular, if there is a tight relationship among the authors and there are [sic] not a large number of individuals engaged in a particular topic area, then one may suspect that the peer review process does not fully vet papers before they are published. Indeed, a common practice among associate editors for scholarly journals is to look in the list of references for a submitted paper to see who else is writing in a given area and thus who might legitimately be called on to provide knowledgeable peer review. Of course, if a given discipline area is small and the authors in the area are tightly coupled, then this process is likely to turn up very sympathetic referees. These referees may have coauthored other papers with a given author. They may believe they know that author’s other writings well enough that errors can continue to propagate and indeed be reinforced.”

Wegman, Scott, and Said then set to examine whether or not such close relationships existed in the paleoclimate community, and they note that “in our further exploration of the social network of authorships in temperature reconstruction, we found that at least 43 authors have direct ties to Dr. Mann by virtue of coauthored papers with him. Our findings from this analysis suggest that authors in the area of paleoclimate studies are closely connected and thus ‘independent studies’ may not be as independent as they might appear on the surface.”

Such incestuous relationships almost certainly also exist in other subcommunities of climate research, including predictive modeling, climate sensitivity estimation, greenhouse gas residence times, dendro-climatology, and more.

2. Climategate: Kerry’s Shoe on the Other Foot?

The existence of such “tribalism” in climate science has recently been thrown into stark relief by the public release of a vast quantity of files and e-mails that were either taken from the computer system of the University of East Anglia by hackers, or posted to the Internet by a whistle-blower. The University of East Anglia is home to the Climatic Research Unit, until recently considered one of the most important climate research institutions in the world, and is a supplier of information to the United Nations IPCC.

More than one thousand e-mails and two thousand other documents were posted to the Internet; it will likely take months to fully explore the archives, and verifying the authenticity of individual documents may be impossible (WSJ). But from early inspection, there are strong suggestions that the researchers at the Climatic Research Unit, along with their colleagues elsewhere, actively sought to prevent contrary findings from being published in the peer-reviewed literature.

Here are some examples:

From: Tom Wigley, January 20, 2005. If you think that [James E.] Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU [American Geophysical Union] channels to get him ousted. [Author’s note: Saiers, the editor of Geophysical Research Letters, was later ousted.]

From: Michael E. Mann, March 11, 2003. This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature.” Obviously, they found a solution to that—take over a journal! So what do we do about this? I think we have to stop considering “Climate Research” as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal. We would also need to consider what we tell or request of our more reasonable colleagues who currently sit on the editorial board.

From: Edward Cook, June 4, 2003. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. . . . If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically. . . . I am really sorry but I have to nag about that review—Confidentially I now need a hard and if required extensive case for rejecting—to support Dave Stahle’s and really as soon as you can. Please.

From: Tom Wigley, April 24, 2003. Mike’s idea to get editorial board members to resign will probably not work—must get rid of [Hans] von Storch too, otherwise holes will eventually fill up with people like [David R.] Legates, [Robert C.] Balling, [Richard S.] Lindzen, [Patrick J.] Michaels, [S. Fred] Singer, etc. I have heard that the publishers are not happy with von Storch, so the above approach might remove that hurdle too.

From: Phil Jones, July 8, 2004. I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow—even if we have to redefine what the peer-review literature is!

3. Adaptation Policy is Not ‘Do Nothing’

Finally, Sen. Kerry implied that I had not said what I would do about the risk of climate change. This is incorrect. In my response to him, and other members of the committee, I offered to provide my latest paper on adaptation to the committee. The summary is as follows: “The Earth’s climate is prone to sharp changes over fairly short periods of time. Plans that focus simply on stopping climate change are unlikely to succeed; fluctuations in the Earth’s climate predate humanity. Rather than try to make the climate static, policymakers should focus on implementing resilience strategies to enable adaptation to a dynamic, changing climate. Resilience strategies can be successful if we eliminate current risk subsidies and privatize infrastructure.”

A PDF of the original article this material was published in can be found here.


  1. Jon Boone  

    Outstanding piece, beautifully written and argued. Science is foremost a methodology for inquiry and validation, insisting upon skeptical, disinterested experiment, evaluating prediction in ways that require replication by others who are themselves disinterested. Because financial reward and career opportunity create such bias in the halls of those working as “scientists,” disinterest, the bedrock of scientific inquiry, is in very short supply. Peer review in today’s institutionalized systems of grants and contracts has a distinct ecclesiastical character, with Scribes and Pharisees sanctifying doctrine. And the Devil takes the hindmost….

    Happy Holidays!


  2. Robert Bradley Jr.  

    I think we also need to keep in mind the ‘smartest guy in the room’ problem–and the ‘silver bullet’ approach to energy policy where it is believed that genius can transform the energy market according to political desire.

    I am reminded about Samuel Insull’s recollection of an exchange he had with Thomas Edison:

    “I asked [Edison] once whether he believed in genius. He said he did not: ‘It is all bunk.’ I said, ‘What do you believe in?’ He replied, ‘I believe in the experience of a man who knows a few thousand things that won’t work.'”

    I wish DOE secretary Steven Chu, science advisor John Holdren and the rest knew what didn’t work.


  3. kgreen  

    Thanks for the kind remark, Jon!


  4. Marlo Lewis  

    Yes, very valuable column, Ken.

    A glaring example of how easily peer review becomes cronyism and an old-boy network is PNAS, the flagship publication of the National Academy. Academy members get to “line up” their own referees for papers they submit (Science magazine, 18 September 2009, http://www.sciencemag.org/cgi/reprint/325/5947/1486-b.pdf). Until recently, they could also extend this “privilege” to papers submitted by friends who were not yet Academy members.

    At a February 25, 2009, House Ways and Means Committee hearing, James Hansen declined to address John Christy’s critique of Hansen’s climate sensitivity assumptions. Hansen told the Committee to refer the issue to the National Academy and accept its verdict as authoritative. Hansen is a National Academy member, Christy is not. So what Hansen was really proposing was to let the old boy network to which he belongs settle the matter. A blog post I did on this incident in light of Science magazine’s column about peer review cronyism at PNAS is available here: http://www.openmarket.org/2009/09/23/pnas-peer-review-or-old-boy-network/

