Posts from — September 2010
The American Clean Energy and Security Act (H.R. 2454, aka the Waxman-Markey climate bill) and the American Power Act (aka Kerry-Lieberman climate bill) both contained explicit provisions to create not just a U.S.-side cap-and-trade program for carbon dioxide (CO2) but also a single, transatlantic emissions trading scheme.
The problem is that even if cap-and-trade is dead in the U.S. Senate, its advocates remain committed and have options for international action. The more this can be understood, the more the electorate can reject U.S.-side help for a futile, costly international scheme to regulate CO2 in the name of “stabilizing” climate.
A clear warning that supranational cap and trade was planned beyond the EU’s borders came in a speech last May by Jos Delbeke, the European Commission’s Deputy Director General for DG Environment. He told an audience in Berlin that:
“We are confident that we now have a solid system from which we can link up to similar schemes elsewhere in the world. The EU’s goal is indeed a global carbon market.…
The EU ETS and the future U.S. cap-and-trade system – integrated into a transatlantic carbon market – can be the twin engines driving the OECD-wide carbon market. The progress on domestic legislation in the U.S. is an essential step in this regard and we are encouraged by Congressional timetables for getting draft legislation to a floor vote in the coming months.”
I recently spoke to Adam C. T. Matthews, the Secretary General of GLOBE International – an environmental policy talking shop for parliamentarians – who told me that this is something “which Europe was desperate for”. They think that the carbon market could be in serious trouble without it.
Those drafting cap and trade legislation in the United States haven’t let their European counterparts down. In Waxman-Markey there is explicit provision to link with other cap and trade schemes. The conditions are set out in Section 728. International Emission Allowances:
“The Administrator, in consultation with the Secretary of State, may by rule designate an international climate change program as a qualifying international program if—
(1) the program is run by a national or supranational foreign government, and imposes a mandatory absolute tonnage limit on greenhouse gas emissions from 1 or more foreign countries, or from 1 or more economic sectors in such a country or countries; and
(2) the program is at least as stringent as the program established by this title, including provisions to ensure at least comparable monitoring, compliance, enforcement, quality of offsets, and restrictions on the use of offsets.”
For approved programmes – and the intention is clearly that the European Union Emissions Trading Scheme (EU ETS), at least, would qualify – emissions allowances from abroad are interchangeable with those granted in the US. Section 722. Prohibition of Excess Emissions makes that clear: [Read more →]
September 30, 2010 2 Comments
Part I examined the true costs of ethanol and windpower and found that both are highly uneconomic compared to their alternatives. Both government-dependent fuels are also inferior products, making a straight cost comparison misleading.
The environmental characteristics of both ethanol and windpower are also problematic compared to their more energy-dense, consumer-preferred alternatives.
Is Ethanol Green?
Given the high cost of the ethanol mandate, the putative benefits – energy independence, green jobs creation, environmental improvement – come at a steep price. But costs aside, there are other reasons to doubt whether these benefits are real. The gulf between hype and reality is perhaps greatest when it comes to environmental performance.
The negative environmental externalities associated with petroleum-derived fuels – particularly oil spills, air pollution, and greenhouse gas emissions – have long been a major focus of the environmental movement and federal regulators. Thus, many simply assumed that ethanol, by supplanting some of the gasoline supply, would be an improvement. Unfortunately, the mandate is teaching us, the hard way, that ethanol has plenty of its own environmental negatives.
Environmental organizations have raised concerns about the increased inputs of energy, pesticides, and fertilizer to grow the additional corn now needed to meet fuel as well as food demand. The same is true for the stress on water supplies, especially now that corn production has been expanded into locales where rainfall is insufficient and irrigation is needed. Land previously in its natural state has been converted to cropland. The facilities that distill the corn into ethanol also require significant energy and water inputs and produce industrial emissions.
The use of ethanol in motor fuel has had a mixed impact on air quality. It lowers some types of pollutants, such as carbon monoxide, but increases others, such as the evaporative emissions that contribute to smog. In fact, certain high-volatility components of gasoline must be removed before adding ethanol in order to prevent the overall blend from violating Clean Air Act requirements in high smog areas. [Read more →]
September 29, 2010 24 Comments
Repeating past mistakes is an unfortunate but common part of federal policy, and perhaps no more so than with energy. Indeed, much of the Obama administration’s “clean energy economy” and “energy independence” agenda is a virtual repeat of the follies from the 1970s – failed attempts by Washington to pick winners and losers amongst alternative energy sources and energy-using technologies, as well as taxes and regulations that exacerbated the very concerns they were supposed to address.
One of the Reagan Administration’s lesser-remembered successes was the repeal of much of this government meddling beginning soon after taking office in 1981. Reagan’s turn away from energy central planning and towards free markets brought down energy costs and helped launch a long period of economic growth.
Of course, this decades-old lesson may be lost on younger politicians, bureaucrats, and activists who seem unaware that their energy policy ideas are proven failures from the age of disco. But the same cannot be said of efforts to enact a federal renewable electricity standard (RES), as this would be a near-exact repeat of a blunder that was launched just a few short years ago – the renewable fuels mandate.
Part I of this two-part post will review the lessons of the RFM, or ethanol mandate. Part II tomorrow will turn to windpower, the central energy source of the RES. Indeed, the requirement that ethanol be added to the gasoline supply has quickly proven to be an economic and environmental failure. Congressional proposals mandating wind and other renewable sources of electricity show all the signs of becoming a similar flop, but with far more serious implications.
The True Cost of Ethanol
It should come as no surprise that the renewable fuels mandate has raised the cost of driving. After all, if ethanol were cost competitive with petroleum-derived gasoline, it would have caught on without substantial government intervention. Nonetheless, despite repeated promises during the 2005 energy bill debate to help provide relief from high pump prices, Congress mandated that a specified amount of renewable fuels – mostly ethanol derived from corn – be added to the gasoline supply.
The 2007 energy bill increased the mandate substantially. The law raised the targets to 13 billion gallons of renewable fuels in 2010 – 12 billion from corn, and the rest from non-corn renewables like cellulosic ethanol and biodiesel. This is a near-tripling of ethanol use over the last five years. The mandate increases each year and will reach 36 billion gallons by 2022, with 15 billion gallons coming from corn and 21 billion from non-corn renewables. [Read more →]
September 28, 2010 10 Comments
We’ve all heard the pitch about how wind is free and that, once a windpower facility is constructed, the cost of generation is low because there is no fuel expense. We’re also often reminded that having no fuel cost means wind will help insulate consumers from wildly fluctuating energy prices.
The concept is easy to grasp, and rural communities considering whether to host a wind facility are likely to conclude that the project will produce local and regional benefits in the form of lower electricity bills.
The fact is, the price of electricity within a grid region is set at a single price known as the market-clearing price (MCP). In most organized electricity markets, electricity generators are encouraged to participate in a daily or day-ahead auction process whereby a uniform market price–the MCP–is established. The MCP is the offer price of the highest-priced generation accepted within the market.
Ross Baldick states in his paper, Single Clearing Price in Electricity Markets:
Consider a simple electricity system with baseload coal generators having low production costs of approximately $25/MWh, and gas-fired peakers having higher production costs of approximately $100/MWh. Off-peak, when demand is lower, only the coal generators may be necessary to meet demand. The market-clearing price for energy is set by the coal offer price, which can be expected to be around $25/MWh. However, on-peak, when demand is higher, both the coal and the gas-fired generation may be required to meet demand and the market-clearing price will be set by the offer of the gas-fired generation, which can be expected to be around $100/MWh. On-peak, both the coal and the gas-fired generation receive the market-clearing price.
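The uniform-price mechanism in Baldick’s example can be sketched in a few lines of Python. This is a toy illustration with made-up capacities, not how any real ISO computes prices – actual auctions add transmission constraints, demand bids, and much else:

```python
# Minimal sketch of a uniform-price (single clearing price) auction.
# Offer prices follow Baldick's coal/gas example; the MW capacities
# and demand levels are illustrative assumptions.

def clearing_price(offers, demand_mw):
    """Stack offers cheapest-first; the last offer needed to meet
    demand sets the market-clearing price (MCP) paid to all."""
    dispatched = 0.0
    for price, capacity in sorted(offers):
        dispatched += capacity
        if dispatched >= demand_mw:
            return price  # highest-priced accepted offer sets the MCP
    raise ValueError("insufficient capacity to meet demand")

offers = [(25.0, 800), (100.0, 400)]  # ($/MWh, MW): coal baseload, gas peaker

print(clearing_price(offers, 600))    # off-peak: coal alone suffices -> 25.0
print(clearing_price(offers, 1000))   # on-peak: gas is needed -> 100.0
```

Note that on-peak the coal units also earn the $100/MWh price set by the gas offer, not their own $25/MWh offer – the infra-marginal rent that the single-price design creates.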
How It Works
The New England ISO (ISO-NE) and New York ISO (NYISO) typically operate using a day-ahead auction in which generators are required to offer firm levels of production for each hour of the next power day. The energy price, in turn, is determined by those bidding into the system; all generators receive the same price per megawatt-hour of generation. Significant penalties apply if a generator is unable to meet its commitment. [Read more →]
September 27, 2010 5 Comments
Energy and the Dodd-Frank Act: More Bad from the Party in Power (more employment for lawyers and consultants)
U.S. energy markets face a new regulatory framework arising from the failings of the financial sector. Trading costs will rise, threatening liquidity. However, many key elements of the Wall Street Transparency and Accountability Act of 2010 (Dodd-Frank Act) have been passed on to regulators. Their true nature will emerge only with time. The Act does little to streamline oversight activities, while the biggest problem may prove to be ‘regulatory creep’.
The Dodd-Frank bill cleared the U.S. House–Senate Conference Committee back on June 25 following intensive days of negotiation, lobbying, and a final all-night drafting session. The House of Representatives quickly approved the legislation, but it stalled in the Senate, where a super-majority is required to avoid a filibuster. After considerable maneuvering, the bill passed on July 15 and was signed into law by President Barack Obama the next week.
Many superlatives have been used to describe the Act – historic, all encompassing, groundbreaking, etc. But it is far too early to judge its long-term impact. This is because many, if not most, of the key elements have been delegated to regulators to work out over the next year. The Act only provides an outline. The U.S. Chamber of Commerce counts more than 350 rules and regulations, not to mention dozens of studies that regulators will be required to craft.
Like so many policy decisions taken in response to the 2008 financial crisis, “the can has been kicked down the road.” Now the real lobbying can begin.
Treasury on Top
For the energy sector, the Commodity Futures Trading Commission (CFTC) will make the key decisions. Its authority expands to include the market for over-the-counter (OTC) swap trading, which until now has been largely unregulated. However, from another perspective, the independence of the CFTC and other commissions has been constrained. The CFTC, along with nine other agencies and financial regulatory bodies, will form the body of a newly created Financial Stability Oversight Council (FSOC). The Secretary of the Treasury chairs the Council.
To support the Council’s work the Act establishes an Office of Financial Research at the Treasury. According to the House’s press release, the Office will “be staffed with a highly sophisticated staff of economists, accountants, lawyers, former supervisors, and other specialists to support the council’s work by collecting financial data and conducting economic analysis.” For those not familiar with the bureaucratic vernacular, this means that the Treasury Department is more or less going to run the show.
A number of federal regulatory agencies – particularly the Securities and Exchange Commission – were criticized during the financial crisis for inadequate oversight and insufficient interagency coordination. For a time, Congress considered merging the SEC and the CFTC, or setting up completely new regulatory bodies. The FSOC may be a reasonable resolution of the obvious difficulties surrounding either of these alternatives; whether it proves so remains to be seen.
The Act also mandates much closer direct coordination between the CFTC and the SEC. Indeed, much of the Act’s language provides parallel directions to the two Commissions, and it orders them to prepare a joint memorandum of understanding on jurisdiction. Clarity may be difficult to come by, since the line between securities and commodities has blurred and promises to become even fuzzier. This obfuscation is further compounded by the plethora of electronic exchanges, “dark pools,” bulletin boards, hedge funds, high-frequency traders, and related computer systems that have grown up around the OTC market. Attempts to rationalize regulation broadly through all markets will be a major challenge. [Read more →]
September 24, 2010 1 Comment
When will Democrats and true environmentalists wake up to windpower, or what Robert Bryce calls the ethanol of electricity? Industrial wind is a scam when seen in all of its dimensions – economic, environmental, and esthetic. Bryce has identified five myths of green energy, and post after post at MasterResource by Kent Hawkins, Jon Boone, and John Droz Jr. has shown that meaningful CO2 reductions from windpower are highly debatable.
Industrial wind is chock full of environmental negatives and isn’t nearly as effective at reducing air emissions as advertised. Big Wind is corporate welfare, with companies like GE and FPL skipping their federal taxes. Wind today is the legacy of Enron, the Ken Lay model of political capitalism. Wind is an assault on lower-income energy users, not only taxpayers. (And Democrats are supposed to be for the little guy….)
Yet the Left marches onward with no inkling of a need–given their own purported values–to make midcourse corrections.
Industrial wind and on-grid solar were supposed to be competitive by now. Beginning in the 1980s, the (false) promises have come again and again from wind and solar proponents. Read the quotations here.
And now, desperation has set in for an industry that needs more government (point-of-a-gun) energy policy to continue its artificial boom. And so a fundraiser yesterday was held by the renewable lobby for Senate Majority Leader Harry Reid (D. Nevada) that caught the eye of the Wall Street Journal, which published this short op-ed, Why They Go Green: [Read more →]
September 23, 2010 10 Comments
Excerpt from Erich Zimmermann, World Resources and Industries: A Functional Appraisal of the Availability of Agricultural and Industrial Resources (New York: Harper & Brothers, 1933), pp. 556–58.
“The Place of Wind in Modern Energy Economy”
Not only are new uses of water as a source of energy being studied, but the power of the wind is likewise being subjected to renewed scrutiny. Two recent proposals are mentioned here in order to indicate the trend of this development. The first is a German proposal which was reported in a wireless from Berlin, February 11, 1932, as follows:
Harnessing the air for generating electric power is advocated by Hermann Honnef, an engineer, whose perfected designs for that purpose are engrossing the attention of scientists and technicians and may revolutionize the German electric industry. Honnef claims to have solved the technical difficulties in a way to efficiently convert the force of the wind into electric power and to overcome the drawback of the inconstancy of air currents which hitherto has been a handicap to the utilization of this source.
His plan is to tap the winds at altitudes of 1,000 to 1,400 feet by means of great steel towers equipped with gigantic windwheels several hundred feet in diameter. Such an aeroelectric unit, requiring 6,000 tons of steel for its construction, would generate 20,000 kilowatts a day, and so economically that a rate of less than a quarter of a cent per kilowatt hour can be figured out, the inventor asserts.
In expounding his project at the Physics Institute of the Charlottenburg Polytechnic, before physicists, electrical engineers and technical representatives of the Reich government, Herr Honnef emphasized that water power suitable for developing electricity was confined to certain localities and that hydroelectric plants were costly, whereas the winds were everywhere available and therefore the logical source for electric power. Forty to fifty of his power towers could be built annually in Germany, he said, and the low rate at which power produced by them could be furnished to consumers would lead to hitherto unthought [line missing]. He urged the immediate construction of a wind tower, preferably in Berlin, to serve the twofold purpose of initiating the new process and affording means for further observation and experiment. A representative of the Reich Transport Ministry suggested beginning with a smaller tower to be built for testing purposes.
The second proposal is based on the application of the rotor principle of Anton Flettner, whose ill-fated rotor ship attracted wide attention some years ago. It was seriously discussed by Waldemar Kampffert, an authority on scientific subjects, under the headline “Harnessing of Wind in New Jersey Plant May Hold Importance for Industry.”
An excerpt from the lengthy article follows: [Read more →]
September 22, 2010 3 Comments
The Holy Grail of climate change is a quantity known as the climate sensitivity—that is, how much the average global surface temperature will change from a doubling of the atmospheric carbon dioxide concentration. If we knew this number, we would have a much better idea of what, climatologically, was headed our way in the future and could make plans accordingly.
Thus far, however, this prize has been elusive. Back in 1990, in its very first Assessment Report, the Intergovernmental Panel on Climate Change (IPCC) suggested that the climate sensitivity was somewhere between 1.5°C and 4.5°C. In its latest Fourth Assessment Report, published in 2007, the IPCC said the climate sensitivity was likely to be between 2.0°C and 4.5°C, and unlikely to be less than 1.5°C. Not a whole heck of a lot more certain than where things stood 20 years ago – and this despite a veritable scientific crusade to determine a more precise value.
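To see concretely what a given sensitivity implies, a back-of-envelope calculation (my illustration, not one from the IPCC reports) uses the standard logarithmic approximation: equilibrium warming is the sensitivity times the base-2 logarithm of the concentration ratio, with 280 ppm as the conventional preindustrial baseline:

```python
import math

def warming_at(concentration_ppm, sensitivity_c, baseline_ppm=280.0):
    """Equilibrium warming under the standard logarithmic approximation:
    each doubling of CO2 adds `sensitivity_c` degrees C."""
    return sensitivity_c * math.log2(concentration_ppm / baseline_ppm)

# One doubling (280 -> 560 ppm) yields exactly the sensitivity:
print(warming_at(560, 3.0))  # 3.0

# The AR4 "likely" range of 2.0-4.5 C applied to the same doubling:
print(warming_at(560, 2.0), warming_at(560, 4.5))  # 2.0 4.5
```

The exercise shows why the number matters so much: the same emissions path implies more than twice the warming at the top of the IPCC range as at the bottom.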
A prominent member of the quest is the University of Alabama in Huntsville’s Dr. Roy Spencer. Dr. Spencer has, for several years now, been trying to untangle climate feedbacks from climate forcings. If apparent feedbacks are really forcings, or vice versa, then the determination of climate sensitivity is confused and prone to being wrong (and likely erring on the high side).
Dr. Spencer has long held that what has generally been taken to be a positive feedback from cloud cover changes in response to climate warming (i.e. cloud changes act to further enhance a CO2-induced warming) is actually the other way around—random cloud cover changes force temperature changes. However, trying to demonstrate that this is the case has proven challenging, and trying to convince the general climate community has been virtually impossible.
To help bring his ideas to a wider audience, Dr. Spencer has written a book about his hypothesis and his research in support of it, and has now, after years of tireless pursuit, published a paper in the peer-reviewed scientific literature.
Realizing that his findings run counter to the extant mainstream view of things, he has taken the step to ask for “physical scientists everywhere” to try to debunk his ideas. The appeal for scrutiny is intended to serve both science and Dr. Spencer in helping to solidify and illuminate a potential new way forward to finding the elusive Grail.
Recently, Dr. Spencer has written a nice summary of his ongoing research and what, in his view, its implications are. Rather than having me rehash his synopsis, Dr. Spencer has graciously permitted us to reprint a piece that originally appeared on his excellent website (a site well worth checking from time to time).
Hopefully, readers of MasterResource will find this cutting-edge climate research interesting, and I am sure that if any of you have any pertinent suggestions for Dr. Spencer regarding his work, he would be happy to hear them.
Here is the excerpt: [Read more →]
September 21, 2010 9 Comments
[Note: This article has been updated to Twenty Bad Things about Windpower — go here.]
Trying to pin down the arguments of wind promoters is a bit like trying to grab a greased balloon. Just when you think you’ve got a handle on it, it squirts away. Let’s take a quick highlight review of how things have evolved.
1 – Wind energy was abandoned well over a hundred years ago, as it was totally inconsistent with our burgeoning modern need for power, even in the late 1800s. When we throw the switch, we expect that the lights will go on — 100% of the time. It’s not possible for wind energy, by itself, to ever do this, which is one of the main reasons it was relegated to the dust bin of antiquated technologies (along with other inadequate sources such as horse power).
2 – Fast forward to several years ago. With politicians being convinced by lobbyists that Anthropogenic Global Warming (AGW) was an imminent threat, a campaign was begun to favor all things that would purportedly reduce CO2. Wind energy was thus resurrected, as its marketers pushed the fact that wind turbines did not produce CO2 in their generation of electricity.
3 – Of course, just that by itself is not significant, so the original wind development lobbyists then made the case for a quantum leap: that by adding wind turbines to the grid we could significantly reduce CO2 from fossil fuel electrical sources (especially coal). This argument became the basis for many states’ implementing a Renewable Energy Standard (RES) — which mandated that their utilities use an increased amount of wind energy.
4 – Why was a mandate necessary? Simply because the real-world costs of integrating wind energy made it a very expensive option. As such, no utility company would likely do this on its own; it had to be forced to. [Read more →]
September 20, 2010 40 Comments
[Editor Note: This post complements a previous entry at MasterResource by Guillermo Yeatts, Subsoil Oil and Gas Privatization: Private Wealth for the Common Good.]
Government intervention in free markets is predicated on market failure. But no such rationale explains why federal and state governments have owned and managed hydrocarbon-bearing onshore and offshore lands. Government involvement can be explained by little more than the historical precedent of sovereign ownership of unowned property, and by habit.
In a private property world, surface and subsurface areas would be unowned until the positive acts of discovery and intent to use. Under the “homestead” theory of first property title, the state of nature (unowned area) would not be the property of government but the first resource entrepreneur who, in the immortal words of John Locke, “tills, plants, improves, cultivates and can use the product of” the surface or subsurface to “enclose it from the common.”
Sovereign ownership would be displaced by a rational ownership system within the private sector, and individual accountability and economic incentives would reign over the inherent land-use conflicts on behalf of “all of the people.” The privatization process can take many forms; a Cato Policy Analysis by Terry Anderson, Vernon Smith, and Emily Simmons, “How and Why to Privatize Federal Lands,” proposes a 20–40 year transfer. But other things equal, the sooner the transfer the better, so long as it meets the basic criteria outlined by Anderson et al.:
- Allocation to the Highest-Valued Use
- Low Transactions Costs
- Broad Participation in Divestiture Proceedings
- Recognition of Squatters’ Rights
As it is now, government ownership of a resource transforms authorities into central economic planners to answer the questions of who does what, when, where, and how much. Such is the position of the Department of the Interior’s Bureau of Ocean Energy Management, Regulation and Enforcement (formerly the Minerals Management Service) in regard to offshore leasing and publicly owned onshore development.
If all subsoil rights had been socialized in the United States, a severe economic calculation problem would have existed for the Department of Interior. But a coexisting (and much larger) private lease market, at least on dry lands, has provided crucial information that Interior over many decades has used to make decisions.
Nonetheless, political control over swaths of mineral-bearing subsoil for over a century has led to administrative problems at Interior such as: [Read more →]
September 17, 2010 2 Comments