“In the long run, [government] subsidies can stifle technological progress and retard true commercialization. If state-of-the art technologies find a market, some of the private incentive for further improvement is dissipated. The acceptable becomes the enemy of the better, because individual firms come to have a stake in present technology. Minor improvements will be made to stay ahead of the competition, but there is little motivation toward major steps away from a successful line of business. Once a basic design is established, it also becomes more difficult for federal research and development managers to support radically different approaches to the same problem. There is fear of appearing foolish, hesitation in seeming to second-guess prior decisions, concern about upsetting investment in the operating technology, and pressure to satisfy competing demands for funds to support marginal improvements to current practice.”
– Sam Schurr et al., Energy in America’s Future: The Choices Before Us (Baltimore: The Johns Hopkins University Press, 1979), pp. 534–35.
The Fukushima Daiichi nuclear disaster in Japan has once again caused many to question the appropriateness of nuclear power. The timing is significant: in the U.S., Georgia Power has placed an order for two nuclear reactors, while other utilities are considering the possibility of constructing new reactors for the first time in decades.
To move the process along, the Obama administration has increased the amount of loan guarantees available to utilities, knowing that the cost of a new reactor will run between $6 billion and $8 billion. Obama’s latest proposal was to increase the loan guarantee kitty by $36 billion (to $54 billion). That would be enough to jump-start between seven and ten new reactors, according to the Department of Energy.
Obama unveiled his new energy plan earlier in the year. But with Fukushima, and with federal budget cuts in vogue with both political parties, the odds have worsened that Georgia Power’s Vogtle Units 3 and 4 (and other proposals) will become the nation’s first new nuclear power units to operate in more than 30 years.
Clearly the costs, along with renewed environmental concerns, make the notion of any revival tenuous. A larger issue is why a revival is necessary at all for a technology that promised so much. Looking back, the events at Three Mile Island in 1979 come to mind. Yet the decline of nuclear power started before Three Mile Island. Consider that until just recently, no nuclear plant had been ordered since 1978.
In reality, the decline of nuclear power can be traced to government policies that emerged during the 1950s, policies from which the technology has never fully recovered.
Getting on the Wrong Track
As envisioned in the 1950s, nuclear power promised low-cost, abundant energy that would usher in an era of unprecedented economic prosperity, while at the same time demonstrating American technological superiority during the Cold War.
To ensure that this would happen, the federal government implemented a series of experimental and demonstration programs. The programs began in 1954 and lasted into the late 1960s, culminating in the construction of larger demonstration plants intended to show that nuclear power was, at the very least, technically viable.
At the time, the only reactors capable of rapid development were of the light water variety, due largely to the early development of the Nautilus submarine reactor. These government initiatives, which were carried out in partnership with electric utilities and reactor manufacturers, did produce a rapid scale-up of the technology, with more than a five-fold increase in reactor capacity in ten years.
Government attempts to rapidly develop nuclear power prompted the reactor manufacturers to aggressively pursue the technology. In this regard, General Electric and Westinghouse entered into an intense competition to sell reactors. Larger reactors were constructed along with assurances that the technology would prove to be competitive with fossil fuels.
Unfortunately, these claims were not based on sufficient construction or operating experience. Nevertheless, the utilities were eventually convinced, and by the late 1960s large numbers of reactors had been ordered. Few proved to be economical, and the result was higher utility bills for consumers.
In retrospect, the dual goals of the government initiatives, technical superiority and commercial viability, were impossible to achieve together. Technical superiority was achieved because the government poured millions of dollars into reactor development over a short period of time. Commercial viability, on the other hand, would have required a slower and more cautious approach, in which investments were made based on experience and demand-side forces rather than government stimulation and prodding.
“Atoms for Peace” (Eisenhower)
The initial push for nuclear power can be traced to President Eisenhower’s “Atoms for Peace” speech delivered before the United Nations in 1953. The crux of the speech urged mankind to put something so destructive to positive uses. The president also hoped that by developing the peaceful atom, the United States would demonstrate its technical prowess and leadership as the Cold War was being waged.
The tone of the speech might have suggested a significant government initiative, but such was not the case. The Eisenhower administration hoped that the speech would motivate the utilities to go nuclear with little government assistance. At the time, however, conventional fossil fuel technology was meeting demand as prices for electricity declined.
Nonetheless, the political clamor in Washington was so intense for a government-driven initiative that the White House announced a five-year reactor-development program in 1954. The program would test several reactor designs to determine which one(s) held the greatest potential for commercialization.
It soon became apparent that the only reactor capable of rapid development was the light water variety largely due to the development of the Nautilus submarine reactor. A scaled-up version of that reactor would power the first commercial nuclear plant at Shippingport, Pennsylvania, which opened in 1957 and worked quite well. The other reactor designs experienced various problems and were years away from commercialization.
In more normal times, without the tensions of the Cold War, the initial experimental program might have been enough. Yet many in Washington felt the program was anemic and wanted a major effort in which the government would construct and operate plants. To offset the push for “nuclear TVAs,” the administration announced its “Power Reactors Demonstration Program” in 1955, under which the government would partner with manufacturers and utilities to build larger reactors.
Two other rounds were eventually added, lasting into the 1960s. Although utility interest was far from overwhelming, a number of reactors were constructed. The most notable outcomes of the demonstration programs were the rapid increase in reactor size, from 60 megawatts to 330 megawatts in ten years, and the fact that only light water reactors proved marketable.
On balance, the various government initiatives failed to spur much utility interest, but such was not the case with the reactor manufacturers. Beginning in the mid-1950s, General Electric and Westinghouse competed intensely for reactor sales, but few were forthcoming until 1963. In that year, General Electric made a “turnkey offer” to build a large nuclear plant in New Jersey for a fixed price of $66 million, with operating costs supposedly competitive with coal-fired plants. Westinghouse felt compelled to match the offer, and in all, 13 “turnkeys” were built, at losses to the manufacturers estimated to be as high as $1 billion. Notwithstanding, the turnkey offers had the desired effect of luring the utilities. In 1966–67 alone, 49 plants were ordered, several times larger than anything in operation.
With relatively little experience in constructing large facilities, builders often underestimated costs by a factor of two. By the mid-1970s, fifty plants were operating, but none ordered after 1968 had gone on line.
Ever-escalating costs caused utilities to pull back, and 75 plants were eventually canceled. The reactors that were completed averaged $800 million more than the original estimates. One writer called this “the largest managerial disaster in business history,” one that necessitated rate hikes for consumers while producing large profits for manufacturers.
The federal government’s attempt to jump-start nuclear power to achieve technological superiority, however, clearly succeeded in one sense. American light water technology was eventually adopted around the world. The bad news was that it was radically uneconomical. More economical nuclear power required a different approach.
In retrospect, the government’s first five-year program, in which several reactor designs were tested, was on the right track. Given the potential dangers involved, the government needed to ensure the relative safety of each design. Designs that did not perform well could be dropped or subjected to further research and development. Once a design performed satisfactorily, the government’s function should have shifted from an experimental to a regulatory role, leaving decisions about constructing larger plants to the private sector, subject to sensible regulations.
If such a policy had been pursued, those making investment decisions would have assessed the economic potential involved. Instead, excessive government stimulation rushed the technology into production before it had become a routine industrial process, which is the ultimate reason a revival is necessary today.
William Beaver, Ph.D., is a professor of social science at Robert Morris University, located near Pittsburgh. He has written a book and a number of articles on nuclear power. This post is adapted from his essay of the same title, “The Failed Promise of Nuclear Power,” published in The Independent Review, Vol. 15, No. 3 (Winter 2011), pp. 399–411.