
On Scientific Method: Comment on Hawkins

By Jon Boone -- February 24, 2016

“’Unaccountable statistics’ [are] statistical goulash that sounds tangy and sophisticated but is actually bereft of substance, and used to make predictions that are almost never accounted for. Any number of ‘scientific’ renewable energy reports, from NREL to Stanford to MIT, are of this kind.”

Kent Hawkins’s post yesterday, “Science, Advocacy, and Public Policy,” defends the scientific method against both political correctness and its misuse, often by people who claim to be scientists. This is a major issue in the current energy and climate debate, where exaggeration and bias go hand in hand. I wish to add support to Hawkins’s theses in light of some of science’s nuanced complexity.

Scientific Inquiry

Here’s how a few mainly twentieth-century scientists and thinkers defined the purpose of scientific inquiry:

“Science is the disinterested search for the objective truth about the material world.”–Richard Dawkins;

“The less one knows about the universe, the easier it is to explain.”–Leon Brunschvicg;

“Theories crumble, but good observations never fade.”—Harlow Shapley.

For those interested in a brief but more expansive discourse on what scientists are obligated to do as they pursue any inquiry using the tools of the scientific method, read Richard Feynman’s “Cargo Cult Science.” Matt Ridley also has a good essay on the subject, “What the Climate Wars Did to Science.”

These thinkers emphasize several things:

(1) It’s not about information per se, but about the quality of information.

(2) It’s not a thing but a methodological process in search of the better idea.

(3) All conclusions reached using the scientific method are provisional.

(4) Any scientific inquiry should be conducted in a spirit of skepticism and disinterest. Any perceived bias, including the bugbear of self-interested activism and appeals to authority, should be vetted and accounted for before valid conclusions are reached (any magician, for example, knows how easy it is to fool even intelligent people who think they are privy to a particular truth).

(5) Any scientific conclusion about material reality should have accountability in the here and now: it should be capable of being falsified if it is false, and its results should be repeatable by others (as Kent emphasizes).

Scientific inquiry, by its nature, is often a dialogue between the prescriptive and the descriptive. Although policy recommendations made by scientists are prescriptive, and typically devoid of skepticism and disinterest, the real prescriptive nature of science resides in the foundational tool of the method: the hypothesis, a provisional conditional, that is, an “if/then” proposition in need of evidentiary support in the real world.

If the prescribed alignment of the “if” factors holds true, leading to an accurate prediction about the nature of the “then,” the hypothesis can be considered a functional prescription for the nature of reality in that particular narrow domain. If related hypotheses are validated in this way, especially across a range of disciplines, they can collectively be raised to the level of a theory–a general conceptual tool for analyzing, predicting, and explaining behavior in the natural world. Theories have a welter of complementary empirical support across many fields of study.

So prescription, as a means of making informed and accountable predictions, is fundamental to how science works well; prescription as a means of selling soap is problematic, but not necessarily “wrong,” because it still may point to a particular truth–which can be accountably validated. Or not.

Facts are Contextual

All facts are contextual. What works in one area from one perspective may not work or be true in another. Points of view–and describing them accurately–are lynchpins of good scientific inquiry.

Examples of this are legion. They are pivotal when accounting for the differences between relativity and quantum theory, for example, and for understanding why Newton’s gravitational mechanics must ultimately defer to those of Einstein’s General Theory. Moreover, definitions of terms and processes should be consistent and accounted for as they are loosed on a particular inquiry. In general, this is known as the “operational definition” issue.

Definitions that slip and slide across an explanatory landscape–that mean one thing but are then used, without explanatory context, to mean something different or even opposite–are one of the first clues that a “scientific” claim may be bogus. Witness the MIT report on the future of solar….

Statistics & Cargo Cult Scientists

I want to end by briefly discussing one of my major concerns about the way cargo cult scientists, particularly those bellying up to the trough of academic or government institutions, with their trove of public dollars to dispense, make use of statistics.

Statistics are now so pervasively part of our culture, technology, and language that most of us take the term for granted, so much so that we don’t even examine what we mean by statistical ideas anymore. How many know the difference among mean, median, and average, for example? Or realize that our electronics, including televisions, computers, and medical diagnostic machines, are based upon accountable statistical projections embedded in quantum mechanical algorithms?
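A minimal sketch (with made-up numbers chosen purely for illustration) shows why the distinction matters: a single large value drags the mean far from the median, and “the average” then tells two very different stories.

```python
# Minimal sketch: why 'average' is ambiguous.
# Hypothetical incomes (in $1,000s); one outlier skews the picture.
from statistics import mean, median

incomes = [32, 35, 38, 41, 44, 47, 900]

print(mean(incomes))    # about 162.4: the mean, dragged up by the outlier
print(median(incomes))  # 41: the median, typical of most of the sample
```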

Or understand that Boyle’s Law, that bedrock equation governing the relationship between pressure and volume, is rooted in a statistical appreciation of how individual atoms, each behaving unpredictably on its own, are herded by group behavior into breathtakingly predictable statistical flocks sufficient for us to build mighty civilizations upon them?
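That herding of randomness into predictability can be shown with a toy simulation; this is a sketch of the statistical idea only, using uniform random draws as stand-ins rather than any real molecular physics.

```python
# Toy sketch of "statistical flocks": each individual draw is unpredictable,
# but the average over many draws is reproducibly stable.
# (Uniform random draws stand in for molecular speeds; not real physics.)
import random

def average_of_draws(n, seed):
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

print(average_of_draws(10, seed=1))         # noisy: changes a lot with the seed
print(average_of_draws(1_000_000, seed=1))  # very close to 0.5
print(average_of_draws(1_000_000, seed=2))  # still very close to 0.5
```

A single draw is anyone’s guess; the average of a million of them barely budges, which is the sense in which unpredictable individuals form a predictable flock.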

Statistical methods for assessing the physical conditions sufficient for predicting short-term weather have also been extremely useful in accountable ways. Beyond this, there are the accountable social statistics behind insurance actuarial tables, polling projections, and Madison Avenue ads. Although predictions in this arena leave a greater margin for error, their utility, for good or ill, has made fortunes and provided for social and economic stability.

On the other hand, there has also been a lot of flimflam associated with statistical usage, as evidenced by the quip popularized by Mark Twain: “There are three kinds of lies: lies, damned lies and statistics.” I’ll never forget Darrell Huff’s marvelous tract, “How to Lie with Statistics,” with its shorthand introduction to some of the ways the ignorant are gulled by the crafty and craven.

However, I’d like to add another category that I think better conveys my concern: “Unaccountable statistics”–statistical goulash that sounds tangy and sophisticated but is actually bereft of substance, and used to make predictions that are almost never accounted for. Any number of “scientific” renewable energy reports, from NREL to Stanford to MIT, are of this kind.

But there’s a raft of others, including most epidemiological analyses, nutritional studies, climate change modeling (don’t you love those 100-year-out unaccountable predictions?), and, of course, the investment wizards of Wall Street. Perhaps someone will one day write a “How to Lay with Statistics” as a means of exposing the trickeration of statistical unaccountability (with no apologies to Ken Lay).

Occam’s Razor

I can’t end this discussion without at least some mention of Occam’s Razor, also known as the principle of parsimony. Among competing hypotheses, scientists prefer the one with the fewest assumptions that explains all the known facts. Other, more complicated solutions may ultimately prove correct, but—in the absence of certainty—the fewer assumptions made, the better.

Elegance is another descriptor of this idea. All this can become complicated and is mostly conditional. However, the preference for simplicity rests on falsifiability. For each accepted explanation there is a wide range of possible, more complex alternatives, because one can always puff up a failing explanation with ad hoc hypotheses as a hedge to keep it from being falsified; simpler propositions are therefore preferable to more complex ones because they are more readily testable and falsifiable.

The fly in the ointment of explanation, which is really the point of the scientific method (making things plain), is that, as context is enlarged and expanded, newer “facts” often buzz up and over the boundaries of conventional explanations. And, of course, the new evidence must be accounted for.

———-

A lifelong environmentalist and resident of Maryland, Jon Boone helped found the North American Bluebird Society, became an associate editor of Maryland Birdlife, and is a consultant with the Roger Tory Peterson Institute in Jamestown, New York. He paints in transparent watercolors and has written extensively about energy policy and the Dutch master, Johannes Vermeer.

 

9 Comments


  1. Ed Reid  

    Reading Kent Hawkins’ excellent piece and Jon Boone’s extensive and excellent comment thereon, in the context of climate science, makes me cringe.

    My concerns begin with the conflation of terms, such as: weather and climate; global warming/cooling and climate change; climate change, anthropogenic climate change and catastrophic anthropogenic climate change; data and estimates; predictions, projections and potential scenarios; etc. My concerns grow with the conflation of other terms, such as: climate denier; climate change denier; anthropogenic climate change denier; etc. I cringe at other terms, such as: science denier; anti-science; climate zombies; etc., especially when used by climate scientists.

    The most serious of my concerns relate to the conflation of the terms data and estimates, in their various forms. Data are readings taken from instruments. Data taken from properly selected, calibrated, sited, installed and maintained instruments are good data. Data taken from instruments which do not meet these criteria are bad data. Data not collected because of instrument failure, or the failure to install an instrument, are missing data. Nothing can change the character of the data; they are forever good, bad or missing. Data are immutable.

    Data which are “adjusted”, as is common practice with temperature data in climate science, are no longer data. Rather, they are estimates of what the data might have been, had they been collected timely from properly selected, calibrated, sited, installed and maintained instruments. Therefore, the data collected from all of the instruments for any given period, once “adjusted”, are no longer a “data set”, but rather a set of estimated temperatures. These sets of estimated temperatures are then compared with sets of estimated temperatures from prior periods to produce estimated temperature anomalies between the periods being compared. In some cases, the temperature estimates produced from the “adjusted” data are further corrupted by the process of “infilling” temperature estimates from non-existent or non-functional instruments. The three principal producers of the global average temperature anomalies also select data from among the available data, rather than using the entire data set in their analyses.

    The combination of data selection, “infilling” and the use of differing data “adjustment” methods results in differences between the calculated changes in the anomalies (changes in temperature), from year to year. For example, the estimated temperature anomaly (temperature) change between 2014, the previous “warmest year in the instrumental temperature record” and 2015, the current “warmest year in the instrumental temperature record” was reported variously as 0.16 +/- 0.09C (NCEI), 0.13 +/- 0.10C (NASA GISS) and 0.18 +/- 0.10C (HadCRUT). Obviously, the change in the global average near-surface temperature between 2014 and 2015 could not have been precisely 0.16C and 0.13C and 0.18C, though it might have been one of those values, or at least within the range of those values. At least two of those values, perhaps all three, are either inaccurately precise or precisely inaccurate, or both.
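    Laying the three reported values alongside their stated uncertainties makes the point concrete (a quick sketch using only the numbers quoted above):

    ```python
    # The three reported 2014-to-2015 anomaly changes quoted above,
    # as (value, stated uncertainty) in degrees C.
    reported = {
        "NCEI":      (0.16, 0.09),
        "NASA GISS": (0.13, 0.10),
        "HadCRUT":   (0.18, 0.10),
    }

    for name, (value, uncertainty) in reported.items():
        low, high = value - uncertainty, value + uncertainty
        print(f"{name:10s} {value:.2f} C, range {low:.2f} to {high:.2f} C")

    # The three stated ranges overlap between roughly 0.08 and 0.23 C,
    # a band wider than the spread among the central values themselves.
    ```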

    In sciences such as climate, instrument function and accuracy are of extreme importance, since it is not possible to rerun the experiment in the event of an instrument or data acquisition failure, as each day is unique. This makes the apparent cavalier attitude of the “consensed” climate science community to data quality and data integrity very difficult to accept. If data are important, then installing instruments to collect the data and maintaining them to assure continuing data quality are essential. This should certainly be the case if the data are being collected and analyzed to support policy decisions involving the investment of $100+ trillion and the future living conditions of 7+ billion human beings.


  2. Kent Hawkins  

    Jon’s commentary here, providing greater nuance and depth, is a very informative extension of the subject. Some emphasis on this, that is, more acknowledgement of the necessary nuance than I showed, did not make the final cut of my post, largely in the interests of brevity and clarity. Thanks Jon for providing this. I especially like the clarification of the concept of Occam’s razor, which is often incorrectly cited.


  3. Jon Boone  

    Thanks, Ed. You and others might want to read Richard Lindzen’s masterful speech given last summer, reprinted most recently on Energy Matters: http://euanmearns.com/tag/richard-lindzen. He touches on many of the points that you, Kent, and I have made as they focus on Climate “Science.” I especially enjoyed his distillation of the “science on demand” crowd. He also points out, as I did a decade ago, the complicity in this climate catastrophism narrative of mainly fossil fuel companies, what I’ve called Big Energy corporations, which do the world much good by delivering, among other things, fuel for firm-capacity electricity generation. They’re the ones with Big Wind in their portfolios, to be used as PR to gull the dimwitted and, most importantly, as significant tax shelters. They know that Big Wind generally increases the need for fossil-fired generation, since wind machines cannot produce modern power and therefore can never be an energy competitor.

    I saw firsthand how cargo cult science slithered its way into electricity regulatory agencies in the form of for-hire punditry during hearings on renewables power plant and smart meter applications, where engineers, economists, lawyers, biologists, and physicians prostituted their fields of knowledge by delivering opinions and recommendations that fit precisely into what their corporate clients wanted to hear–and paid them to say–intentionally ignoring a range of evidence available from those fields that would have rather thoroughly subverted their testimony. It was fabulously, transcendently disgusting.


  4. Ray  

    Now NASA has decided that the sea level data are wrong and need to be corrected. Naturally, the corrected data show that the oceans are rising. We’re doomed.


  5. John Droz  

    I thought this article, and yesterday’s piece on Science, Advocacy and Public Policy, were very well done.

    On the same topic, I’d strongly recommend that anyone interested read Dr. Robert Lackey’s superior piece on Normative Science (“tinyurl.com/jn4cvpr”). His point is that Science is about revealing the technical truths of our universe, not advocacy.

    Put another way, true Science is not in the value business.

    But, of course, this is exactly the problem (in academia and subsequently): science professors and scientists have become ardent advocates, based on their own personal beliefs.

    Once they get off the road and into that ditch, numerous complications arise — like exactly who is responsible for determining what values should be advocated? Etc, etc.

    IMO this is one of the most problematic issues of our time.

