A free-market energy blog

Is DOE/Lawrence Berkeley Lab’s Windpower Impacts Study ‘Junk Science’? (Albert R. Wilson challenges the ‘experts’)

[Editor’s note: With the author's permission, MasterResource reprints a probing analysis of a recent study by the Department of Energy's Lawrence Berkeley National Laboratory, The Impact of Wind Power Projects on Residential Property Values in the United States. Albert Wilson critically examines a genre of analysis used by wind proponents, including government bodies and environmentalists, that produces a desired result. Comments are invited on this paper as well as on other examples of where methodological tricks are used to justify wind power and other politically dependent energy technologies. (Mr. Wilson's Bio is at the end of the article.)]


by Albert R. Wilson

I recently examined a document published by the Department of Energy’s Lawrence Berkeley National Laboratory titled “The Impact of Wind Power Projects on Residential Property Values in the United States: A Multi-Site Hedonic Analysis” (hereafter “Report”). I express no opinion concerning the impact of wind power projects on residential property values and instead focus on the underlying methods used in the development of the Report, and the resulting serious questions concerning the credibility of the results.

As stated in the title, the primary bases for the conclusions drawn in the Report are hedonic analyses of residential real estate sales data. A hedonic analysis in turn is based on the assumption that the coefficients of certain explanatory variables in a regression represent accurately the marginal contribution of those variables to the sale price of a property.

While I have other issues with the Report (and again reiterate that I have no opinion on the influence of wind farms on residential sales prices), the concerns I have addressed here lead to the conclusion that the Report should not be given serious consideration for any policy purpose. The underlying analytical methods cannot be shown to be reliable or accurate.

The reasons for the conclusion may be summarized as:

1) Lack of access to the underlying data prevents the independent validation of the data, replication of the analysis, testing of alternative analyses, or testing of the conclusions against the real market.

2) The peer review process used for both the literature and the Report can only determine the acceptability of the papers for publication. It cannot reveal the validity, accuracy or reliability of the work behind the papers.

3) Given the peer review actually conducted, the fact that no published and recognized standards for the development of an accurate and reliable regression on sales price were used renders the Report of highly uncertain value for any purpose.

4) The exclusive use of a test of statistical significance indicates only that the coefficients for the Distance and View variables are statistically unlikely to be zero. What we do not know is what those coefficients actually represent. Only a test of economic significance would provide an answer, and none has been conducted.

5) Low explanatory power: the Report’s best R2 is 13% below an acceptable minimum for an accurate regression on sales price.

The technical analysis underlying this conclusion follows:


A regression is a statistical process that attempts to quantify a hypothetical relationship between certain factors (explanatory variables) and the value of an outcome (dependent variable). The explanatory variables are related to the dependent variable through a mathematical formula generally referred to as a regression model. In real estate the explanatory variables are usually such things as size (square feet), number of bedrooms and bathrooms, garage space, presence of basement, location, and the like. The dependent variable is sales price. In the Report the authors are basing their analysis primarily on a set of regression models with the inclusion of variables that attempt to estimate the possible impact of distance from and view of turbines.

The mathematics of regression are executed through a computer program that assigns numeric values to the multipliers (coefficients) of the explanatory variables so that, when the sales prices estimated by the regression model are compared to the actual sales prices of the properties upon which the regression is based, the difference is at a mathematical minimum by some measure (e.g., R2 or R-squared, the coefficient of determination). The program accomplishes this by repeatedly changing the coefficients of the explanatory variables, recalculating all of the estimated sales prices with the new coefficients, and comparing the estimated to the actual sales prices until the minimum difference given the data and the regression model is achieved.

Using the hedonic analysts’ favorite measure of R2, the usual hedonic interpretation is that if R2 = 1 then the regression model explains all of the differences between the estimated and actual sales prices. If R2 = 0 then none of the differences are explained and the regression model is a failure. If the underlying regression is not explanatory of the actual data then the dependent hedonic analysis cannot be explanatory.
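The mechanics just described can be sketched in a few lines of code. The example below is purely illustrative: the variables, coefficients, and prices are fabricated for the sketch, and the model is a generic least-squares fit, not the Report’s specification or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical explanatory variables (invented for illustration).
sqft = rng.uniform(800, 3500, n)      # house size in square feet
bedrooms = rng.integers(1, 6, n)      # number of bedrooms

# Simulated "market": price depends on size and bedrooms plus noise.
price = 50_000 + 120 * sqft + 15_000 * bedrooms + rng.normal(0, 40_000, n)

# Design matrix with an intercept column; lstsq finds the coefficients
# that minimize the squared differences between estimated and actual
# prices -- the "mathematical minimum" described above.
X = np.column_stack([np.ones(n), sqft, bedrooms])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# R-squared: the share of price variation the model explains.
predicted = X @ coef
ss_res = np.sum((price - predicted) ** 2)
ss_tot = np.sum((price - price.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"coefficients: {coef.round(1)}")
print(f"R-squared: {r2:.3f}")
```

Because the data here are generated from a known formula, the fit is clean; with real sales data, as the sections below argue, nothing guarantees the fitted coefficients mean what the analyst hopes they mean.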

There are literally thousands of possible real estate regression models. The literature in the hedonic field generally exhibits little agreement on a model’s mathematical form or the explanatory variables that should be included.1 Absent published and recognized standards on the validation of data, model development and testing, and calibration of the model against the real world market, a regression may be nothing more than a rubber ruler that can be stretched to provide a desired result.2


However, a well-developed and tested set of standards does exist. Those standards are published and maintained by the International Association of Assessing Officers (IAAO) and are explicitly for the accurate and reliable estimation of sales prices using regressions, not simply for appraisal purposes as some allege.3 These standards are employed many hundreds of times a day and are continually tested against the market.

For comparison it should be noted that the usual hedonic regression model has an R2 from 10% to more than 60% lower than an acceptable regression under IAAO standards (an IAAO R2 better than 0.90,4 versus the best R2 cited in the Report of 0.78, which is 13% less). No satisfactory scientific explanation has been provided of why a regression with a smaller R2 should yield more accurate and reliable hedonic results.

There is no evidence whatever that the Report employed any standards. While the authors refer to the literature as support for their method, this is little comfort, as there is no evidence that any recognized standards were applied to the work reported in that literature. Further, the literature contains a significant number of papers illustrating some of the problems associated with hedonic studies, ranging from an absence of proper validation of the underlying data, to models deliberately manipulated to magnify the desired impact, to improper use of indicator variables, to a failure to check the results of the models against the market to determine if the proclaimed results actually represent market behavior.5

A common problem with the lack of adherence to standards is that the apparent magnitude and statistical significance of the coefficients of interest may be increased by simply not including important explanatory variables in the regression, generally known as the “omitted variable” problem.6 This omission may be the result of a lack of understanding of residential sales price behavior or from other considerations but the result is the same, skewed coefficient values. There is strong evidence of an omitted variable issue in the Report.
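The omitted-variable problem is easy to demonstrate with simulated data. In the sketch below (entirely hypothetical; these are not the Report’s variables or data), “distance” has no true effect on price, but because it happens to be correlated with house size, omitting size makes the distance coefficient appear large:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical setup: larger houses happen to sit farther out.
distance = rng.uniform(0, 10, n)                       # miles; no true price effect
sqft = 1500 + 150 * distance + rng.normal(0, 300, n)   # correlated with distance
price = 100 * sqft + rng.normal(0, 20_000, n)          # price depends only on size

def fit(X, y):
    """Ordinary least-squares coefficients."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

ones = np.ones(n)
full = fit(np.column_stack([ones, sqft, distance]), price)   # size included
omitted = fit(np.column_stack([ones, distance]), price)      # size omitted

print(f"distance coefficient, size included: {full[2]:.1f}")
print(f"distance coefficient, size omitted:  {omitted[1]:.1f}")
```

With size in the model the distance coefficient hovers near zero; drop size and distance absorbs its effect, producing a large, spurious coefficient. The mechanics are the same whatever the omitted variable is, which is why the author treats the omission of important explanatory variables as a serious flaw.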

Another method of increasing the apparent importance of a coefficient is to aggregate data into increasingly more expansive variable definitions. This procedure was used in the Report and is acknowledged by its authors. “The Base Model described by equation (1) has variables that are pooled, and the coefficients for these variables therefore represent the average across all study areas (after accounting for area fixed effects). An alternative (and arguably superior) approach would be to estimate coefficients at the level of each study area, thereby allowing coefficient values to vary among study areas.”7

The consequence of this aggregation is to distort the quantitative meaning of the coefficients. For example, sales prices in areas of declining population, and therefore decreasing demand (a majority of the areas examined), are not directly comparable to sales prices in areas of increasing population and therefore increasing demand, yet these markets were combined in the Report. The Report also aggregates markets such as those in Washington (used as the base for comparison to all other areas), where the urban market of Kennewick was combined with the rural market of Milton-Freewater, 42 miles distant. The failure to recognize and account for the need for homogeneity of markets is a common failing of hedonics.

One of the major issues with applying the hedonic approach on a nationwide basis, ignoring local market homogeneity, is addressed by the 2009 Coldwell Banker Home Price Comparison Index.8 It makes the point that local markets are critical: for example, a house in Grayling, Michigan sells for $122,675, while the same house in La Jolla, California sells for $2,125,000. Creating an average sales price representing houses from nine states and at least 20 different markets, as the Report did, is a gross oversimplification that cannot provide the specificity required to answer a micro-question such as the influence on sales price of a highly localized condition: distance to or view of a wind energy project.

This problem becomes critical when it is recognized that less than 10% of the sales transactions in the Report had any view of turbines, and that only 2.1% had a view rated greater than minor. The study is dominated by transactions where no influence is reasonably likely. The argument that the report is “data rich” may in fact be an overstatement of the situation because of this issue.

It is worth noting that IAAO standards discourage the use of regression for the analysis of the impact of a proximate condition on value precisely because of the small number of potentially influenced sales available for analysis by regression. Instead, the use of the classic three approaches to value (sales comparison, income and cost) is encouraged as more reliable under these circumstances.9

A major issue pointed to in the literature is the influence of errors in the data. A recent article reported that, using an IAAO certified regression, as few as 15 erroneous sales skewed the estimated sales prices by at least $500 for all but 43 of the 20,000 sales estimated.10 In another instance a single error in the age of a property out of some 18,000 data elements skewed the results of the regression from a finding of an influence on sales price to no influence on sales price. Absent access to the Report data these and similar issues cannot be evaluated. There is no evidence in the Report that any sales confirmation work that might have revealed these issues was undertaken.

Peer Review

The authors of the Report claim it has been peer reviewed and that the method and results are supported by the peer-reviewed literature. Unfortunately, this claim means far less than it seems. Peer review in the context of this Report and the referenced literature consists of the reading of the report by several presumably knowledgeable individuals and the provision of comments to the authors based on that reading, nothing more.11, 12, 13 The authors may or may not have addressed all of the issues raised by the comments.

What is missing from this process is any semblance of testing for the scientific validity of the results, a testing rendered impossible by the refusal of the Report’s authors to provide the underlying data. Absent the data it is not possible to independently validate the accuracy or reliability of the data, replicate the analyses, test alternative regression models (say models that meet IAAO standards), or calibrate the results against the real world market. Absent such scientific testing we have nothing more than opinion upon which to base an estimate of the credibility and applicability of the results.

At best a peer review–as that phrase is commonly used in this field–with respect to both the Report and the literature addresses only the acceptability of the paper for publication but does not in any meaningful way address the validity of the underlying work.

Hedonic Analysis

Hedonic analysis depends entirely on the accuracy and reliability of the underlying regression. If the regression does not conform to recognized standards then we have no independent assurance of that accuracy or reliability, as in this case.

Hedonic analysis also adds a new requirement, specifically that the coefficients of the explanatory variables of interest are quantitatively accurate and represent only the marginal contribution of that explanatory variable to the sales price. This is not a requirement of regression. In this case there is some doubt that the hedonic requirement has been met.

First, computer regression programs are mindless: they simply follow a set of instructions until the instructions are fulfilled and then print the results. It is a simple matter to demonstrate that omitting or adding an explanatory variable will frequently influence both the magnitude and the statistical significance of the other explanatory variables’ coefficients. It is also possible to include a totally meaningless explanatory variable and achieve statistical significance for its coefficient, making it appear meaningful. Absent the application of standards, regressions may easily meet the needs of junk science.
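The point about meaningless variables achieving statistical significance can be shown directly. The simulation below (pure noise on both sides; no real data involved) regresses a random outcome on a random “explanatory” variable thousands of times. By construction there is no relationship, yet roughly 5% of the runs clear the conventional |t| > 1.96 significance threshold by chance alone:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 200, 2000

significant = 0
for _ in range(trials):
    x = rng.normal(size=n)   # meaningless explanatory variable
    y = rng.normal(size=n)   # outcome, unrelated to x by construction
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Standard error of the slope, then the usual t-statistic.
    resid = y - X @ coef
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(coef[1] / se) > 1.96:
        significant += 1

print(f"fraction 'significant' at the 5% level: {significant / trials:.3f}")
```

The 5% false-positive rate is exactly what the significance test promises; the trouble the author identifies is treating any coefficient that clears the threshold as a meaningful marginal contribution to sales price.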

Second, the accuracy and validity of the coefficients of hedonic interest (in the Report, the coefficients associated with View and Distance) must be separately tested to determine whether they comply with the hedonic requirement of accurately, and only, representing the explanatory variables.

In the literature, as in the Report, the usual test employed is that of the statistical significance of the coefficient. Unfortunately, all this test may tell us is that the coefficient is statistically unlikely to be zero.14, 15 Knowing that a number is not likely equal to zero tells us nothing about what it does represent or its importance to an analysis.

To determine whether the coefficient has any hedonic value, the test must be for the economic significance of the coefficient: specifically, a proof that the coefficient accurately and exclusively represents the marginal contribution to sales price of that explanatory variable, and that it is of sufficient magnitude to have a significant impact on sales price. There is no evidence of such testing in the Report, or indeed in the referenced supporting literature.


1 Atkinson, Scott E.; Thomas D. Crocker, “A Bayesian Approach to Assessing the Robustness of Hedonic Property Value Studies,” Journal of Applied Econometrics, Vol. 2, 27-45 (1987).

2 Wilson, Albert; “Real Property Damages and Rubber Rulers,” Real Estate Issues, Summer, 2006

3 Standards on Valuation Models, IAAO.ORG

4 Gloudemans, Robert J., “Mass Appraisal of Real Property”, International Association of Assessing Officers, 1999–One of the basic IAAO training manuals.

5 See, for example, Rogers, Warren, “Errors in Hedonic Modeling Regressions: Compound Indicator Variables and Omitted Variables,” The Appraisal Journal, April 2000.

6 Rogers ibid.

7 Report page 134

8 “2009 Coldwell Banker Home Price Comparison Index,” as cited in CNNMoney.com, “Same 4-bedroom house – Wildly different prices,” September 23, 2009.

9 “Standard on the Valuation of Properties Affected by Environmental Contamination”, IAAO.ORG

10 Cholvin, Brooke, Danielle Simpson, “Assessing Mortgage Fraud,” Fair & Equitable, IAAO, August, 2009

11 Chan, Effie J., “The ‘Brave New World’ of Daubert: True Peer Review, Editorial Peer Review and Scientific Validity,” New York University Law Review, April, 1995, 70, N.Y.U.L. Rev 100. ALSO, Haack, Susan, “Peer Review and Publication: Lessons for Lawyers,” Stetson Law Review, Vol. 36, 2007.

12 “The Editor reads each submitted manuscript to decide if its topic and content of the paper fits the objectives of JRER. Manuscripts that are appropriate are assigned anonymously by the Editor to one member of the Editorial Board and at least one other reviewer. … The referee presents a critique to the Editor who forwards it to the author. Each author should be encouraged to resubmit the manuscript for publication consideration. The Editor makes the final decision regarding re-submissions. …” Editorial Policy and Submission Guidelines, Journal of Real Estate Research, American Real Estate Society, Volume 31, Number 2, 2009.

13 “The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability–not the validity–of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we all know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.” “Genetically modified foods: “absurd” concern or welcome dialog?” Richard Horton, editor of Lancet, 1999; 354: 1314-1315

14 Although difficult to read the following covers both statistical and economic (scientific) significance in some detail, Ziliak, Stephen T., Deirdre N. McCloskey, “The Cult of Statistical Significance”, The University of Michigan Press, Series: Economics, Cognition, and Society, Ann Arbor, MI and particularly the reference materials cited.

15 NOTE that the null and alternative hypotheses in a test of significance are required to be mutually exclusive and collectively exhaustive. The test of significance for a coefficient uses the null hypothesis of equality to zero but the alternative hypothesis is rarely stated. It appears that the hedonic analyst uses the idea that if the null can be rejected, then the coefficient must represent the marginal contribution of that variable to the sales price. The correct alternative hypothesis is that the coefficient is simply not equal to zero and nothing more can be said.

APPENDIX: Professional Profile of Albert R. Wilson


Bachelor of Science in Science Engineering, Northwestern University, Evanston, Illinois; Master of Business Administration, Bowling Green State University, Bowling Green, Ohio

Professional Experience


Various staff and operating management positions with PepsiCo (Division Consultant-Systems, Operations and Risk Management), Rentar Industries (Executive Vice President-Operations), and Home Window Company (Part Owner, President).


A. R. Wilson, LLC specializing in environmental financial risk management and impaired value analysis. During the period 1990 through 1994, acted as President of Environmental Analysis & Valuation, Inc., a consortium of environmental and appraisal experts focused on the development of value impact opinions for litigation support.

Professional Accomplishments

One of the primary developers of the theory, application and language of environmental impairment analysis. Primary strength has been in the development of a unified valuation impact opinion incorporating the expert opinions of appraisers, attorneys, accountants, historians, civil and geotechnical engineers, hydrogeologists, and other specialized professionals. These unified opinions have been highly successful in the courtroom and in negotiated settlements.

Regularly lecture, testify, and write on the subject of environmental impacts on business enterprise and real property value. Published numerous articles in such forums as The Appraisal Journal, Journal of Property Tax Management, and Environmental Watch. Author of Environmental Risk: Identification and Management, which has become a text in several university-level courses throughout the country.

Developed the Engineering Impaired Value Model for the analysis of impacts on business enterprise and real property values. This model provides the quantitative information necessary to support disclosures under SAB 92, and evaluate the financial impacts of alternative tax treatments under IRS Revenue Ruling 94-38.



1 Jon Boone { 02.22.10 at 8:41 pm }

Albert Wilson has provided commendable service here. To reinforce what he has said, I’m pasting below the remarks I made as an intervenor in a Maryland PSC wind hearing in which the wind company, Synergics LLC, introduced as supporting testimony another “government sponsored” wind property values assessment document, infamously known as the Renewable Energy Policy Project. Among its many claims was the finding that wind projects not only did not devalue nearby properties but, in some cases, actually raised them. It too was easily discredited. The effort of government to use public resources as it bends reality to sell its brand of wind soap remains contemptible.

Here’s what I said in 2005:

“One of the most validated real estate precepts is the idea that significant natural views have premium value, and intrusions which restrict that view erode value. Realtors doing business near windplants in the western United States and in Europe understand that property will sell for between ten and thirty percent less than previous market value, depending upon how close it is to the windplant. The few “studies” which appear to support the claim that windplants don’t devalue property are extremely flawed in fact and methodology, often surveying people and evaluating property miles away from a wind site, then “averaging” these results with properties adjacent to windplants.

The Renewable Energy Policy Project (May, 2003) study that Synergics offers on behalf of the claim that its project will not diminish property values contains serious methodological flaws:

1. The study covers just ten projects, only one of which comes close to the size and scope of Synergics’ project, and this site (Madison County, NY, the Fenner Site), with 20 turbines situated on farm fields, not atop tall ridgelines, interestingly showed significant decreases in property values.

2. The time frame of the study was so short that even the study’s authors were compelled to state the data was insufficient to offer compelling conclusions.

3. The study did not verify whether individual properties had a direct view of the windplants, making the use of the term “viewshed” something of a misnomer in this context, since the viewshed properties were actually all properties within a five mile radius of the turbines regardless of whether they had a direct line of sight. To mitigate this problem, the researchers conducted phone interviews with tax assessors and other local authorities to get estimates on the number of properties in the defined viewshed that might have had views of the turbines. However, under scrutiny, these interviewees provided inaccurate estimates.

4. The analysis used in this study did not incorporate distance from a wind development as a variable or weighting factor, so that a viewshed property sale five miles away from a development counted the same as one a quarter mile away. It is at least plausible that if wind developments do have an effect on property values, it would be strongest close to the turbines and decline with distance. Simple geometry suggests that the majority of properties in the area of a five mile circle are likely to be fairly distant from the wind development: 64% of the area of this circle is three miles or more from the center – and only 4% lies within the first mile. Though properties are not necessarily distributed evenly about the landscape, and property values conceivably can be affected by other things in the vicinity, the REPP study confuses substantially the proportion of properties that either have only a distant view of wind turbines or no view at all.

5. The study relied on average rates of sale prices before and after the wind development and between viewshed properties and properties in a comparison group. Therefore, if one calculates that sale prices among viewshed properties increased $50/month faster than sale prices in the comparison group, it makes a difference whether the statistical uncertainty in the point estimate is plus or minus $25/month or $500/month. The former leads to a conclusion that the wind development was unlikely to have had a negative effect on property values, while the latter intimates that the data are inconclusive: there could be a large negative impact, a large positive impact or no impact at all. These “smoothed” average sale prices against a very small time variable create a regression analysis which is, for prediction purposes, almost beside the point, suggestive of nothing.

The REPP “study,” although its basic methodological approach holds considerable promise, is severely flawed. To say, as Synergics does, that the study demonstrates its proposed windplant will have no effect on property values, that it may in fact enhance them, is disingenuous.”
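Point 4 of the quoted testimony claims that 64% of a five-mile circle lies three or more miles from the center and only 4% within the first mile. That is simple geometry (area scales with the square of the radius) and can be verified in a few lines; no data or assumptions are involved beyond the radii themselves:

```python
# Share of a 5-mile circle's area in each distance band: area within
# radius r is proportional to r squared, so the fractions follow directly.
R = 5.0

def share_beyond(r, R=R):
    """Fraction of the circle's area at distance >= r from the center."""
    return 1 - (r / R) ** 2

def share_within(r, R=R):
    """Fraction of the circle's area within distance r of the center."""
    return (r / R) ** 2

print(f"area three miles or more out: {share_beyond(3):.0%}")  # 64%
print(f"area within the first mile:  {share_within(1):.0%}")   # 4%
```

So if properties were spread roughly evenly over the viewshed, the bulk of the "viewshed" sales would indeed sit far from the turbines, which is the dilution the testimony describes.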

Wilson, as a public service, has expanded on my observations and rather expertly exposed the methodological worms at the core of this latest NREL self-serving encomium for wind.

2 Wallace Kaufman { 06.22.10 at 11:15 am }

This study purports to show that markets for properties near wind farms behave differently than human beings who make the market. It wastes a lot of talent to do what can be done much more simply with real human beings who buy and sell homes. The study uses statistics not to clarify the complex but to obscure the obvious.

Find a few homes with good views of windmills. Find a few similar homes with good views of a similar landscape without windmills. Find buyers looking for such homes and let them bid. (Eliminate from the bidder pool all blind and/or deaf buyers, getting a waiver from the DOJ civil rights division if necessary.)

Forget houses a mile or more distant and with no sight or sound impact. These are brought in only to create distracting statistical scenery. I’ve appraised neighborhoods in airport noise cases, highway noise and scenic disturbance cases, power line impacts, etc., and I spent 30-plus years in the actual practice of real estate brokerage and appraisal. Out of sight is generally out of mind for the market. So mixing data on homes within sight and sound with homes that are not obscures the area where values would be most affected.

Next, in a given market the appearance of a nuisance or scenic disturbance generally makes a one-time impact on the homes within the impact area (sight, sound, traffic, smell, etc.). These homes fall below similar homes outside the impact area. After the nuisance is established, the value trends tend to run parallel.

In other words, $200,000 homes will be divided by the onset of a nuisance into, let’s say for example only, $180,000 and $200,000 homes. After that both sets will probably follow the same area price trend.

That’s an oversimplification, but not much. It sets out a hypothesis that I’ve seen validated in the field often. I’ll bet it would be validated clearly by any honest study of wind farm impacts.

So, the impacts, if I’m right, are twofold:
1. With onset of the windfarm, properties with changed view and sound regimes fall in value, separating the impacted area from the non-impacted area and creating two neighborhoods for valuation purposes.

2. The new impacted neighborhood will suffer no further damage, but will not return as much taxes to the govt.

3 Bill Chaffee { 08.06.10 at 6:26 pm }

If the sample size is large enough, then a small number of deaf and/or blind people won’t throw off the results. I don’t think that the wind industry cares about handicapped people; however, they might use the above post to attack critics.

4 Bill Chaffee { 09.24.10 at 3:59 pm }

I’m the only person in my family who has not been allowed to see my mother’s will or to know what’s in the estate. However, I do know that she has given a lot to charity. When I brought up the subject of wind energy, she said that she gave to a wind company. She moved to Oregon about 11 years ago to be with my sister and her family. Are wind energy projects considered to be charities in the Northwest? It’s obvious to me that I’m from a family that has a negative attitude toward the disabled. I think that Wallace Kaufman was trying to make an innocent rhetorical point, but it hit a nerve with me.
