Modern technical innovations operate unlike traditional, pre-industrial advances. They too have their phases of gradual improvement based on tinkering and everyday experience with running a machine or a process, but the initial accomplishments result almost invariably from deliberate and systematic pursuit of theoretical understanding. Only once that knowledge is sufficiently mastered does the process move to its next stage of experimental design, followed by eventual commercialization.
That is precisely how Charles Parsons, Rudolf Diesel, and their collaborators/successors invented and commercialized the two machines that work–unseen and unsung–as the two most important prime movers of modern economies:
steam turbo-generators, which still generate most of the world’s electricity and
diesel engines, which power every tanker and every container ship and energize most trucks and freight trains.
The same process is also how we got gas turbines (jet engines), nuclear reactors, and many other taken-for-granted converters and processes. Ditto for solid-state electronics, which evolved from crude transistors at Bell Laboratories in the late 1940s to the now ubiquitous microprocessors.
Unfortunately, this conquest of the modern world by microchips has helped to create a warped image of universally accelerating technical progress, one that has been unthinkingly promoted both by computing gurus (Ray Kurzweil makes perhaps the most egregious claims, believing that the 21st century will be equivalent to 20,000 years of progress at today’s rate of advance) and by politicians (nobody can compete with Al Gore in this category, with his call to completely repower America in just one decade).
(In the corporate world, Ken Lay’s Enron was the master of perpetual revolution; if you asked questions you ‘didn’t get it.’ We know how that ended up. Bernie Madoff, too, had the magic formula.)
I have named this delusion Moore’s curse because (unlike the crowding of transistors on a microchip) it is fundamentally (that is thermodynamically) impossible for the machines and processes that now constitute the complex infrastructure of global energy extraction, conversion, transportation and transmission to double their capacity or performance, microchip-like, every 18-24 months.
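A back-of-the-envelope calculation shows why microchip-like doubling is absurd for energy converters. The 1 GW starting plant, the 18-month doubling period, and the comparison to total world capacity below are my illustrative assumptions, not figures from the text:

```python
# Sketch: what Moore's-law doubling would imply for a power plant.
# Assumptions (illustrative): a 1 GW plant whose capacity doubles
# every 18 months, sustained for 50 years.
start_gw = 1.0
doubling_months = 18
years = 50

doublings = years * 12 / doubling_months   # ~33.3 doublings
final_gw = start_gw * 2 ** doublings       # ~1e10 GW

print(f"{doublings:.1f} doublings -> {final_gw:.3g} GW")
# Roughly ten billion gigawatts -- millions of times the total
# installed electric capacity of the entire world, which is why
# thermodynamically constrained machines cannot follow a
# microchip-like trajectory.
```

The point is not the exact numbers but the shape of the curve: a few decades of compound doubling overwhelms any physically plausible scale.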
Moreover, most of today’s energy infrastructures and processes have required very long time spans between the formulation of their basic idea, the first attempts at their commercialization and their eventually significant penetration of respective markets (that is capturing at least 10-15% of the total demand).
Two famous examples of such lengthy gestation periods are fuel cells and solar photovoltaic (PV) cells. The principle of the first conversion has been known since 1839; the first practical attempts to use it followed only in 1889, and the first serious attempts to deploy fuel cells as prime movers of road transportation came a century later (during the 1990s) – but we still do not have any affordable fuel cell-powered vehicles.
As for PV cells, the photovoltaic effect was discovered in 1839, the first practical cells were made at Bell Laboratories only in 1954, and the global total of installed peak PV power reached about 20 GW in 2009 – but direct capture of insolation still accounts for less than 0.5% of all installed capacity worldwide (and hence even less of actual generation).
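The sub-0.5% share follows from simple division. The figure of roughly 4,800 GW for total world generating capacity around 2009 is my assumption for illustration, not a number from the text:

```python
# Sketch: PV's share of world generating capacity, circa 2009.
pv_peak_gw = 20.0           # installed peak PV power (from the text)
world_capacity_gw = 4800.0  # assumed world total, ~2009 (illustrative)

share = pv_peak_gw / world_capacity_gw
print(f"PV share of installed capacity: {share:.2%}")  # ~0.42%
# The share of actual generation is lower still, because PV capacity
# factors (~10-20%) are far below those of thermal plants.
```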
Liquefied Natural Gas (LNG)
While the promise of the above two conversions still lies ahead, there is an even more important example of an energy industry whose theoretical foundations are almost as old as those of fuel and PV cells but whose promise has, finally, reached a true commercial maturity and whose future should be a decisive component of the global energy supply for decades to come.
The road toward the global liquefied natural gas (LNG) industry began in 1852 with the pioneering work of James Prescott Joule and William Thomson (later Lord Kelvin) on the liquefaction of gases. They demonstrated that as highly compressed air flows through a porous nozzle, it expands to the pressure of the ambient air and cools slightly. Repeating this sequence creates a cooling cascade, and the gas eventually liquefies.
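The cascade can be sketched numerically. Treating the Joule-Thomson coefficient of methane as a constant ~0.5 K/bar is a crude simplification (the real coefficient varies with temperature and pressure, and practical Linde cycles depend on countercurrent heat exchange), so the stage count below is purely illustrative:

```python
# Highly idealized Joule-Thomson cooling cascade for methane.
# Assumption: a constant JT coefficient, which is NOT physically
# accurate -- the real value changes with temperature and pressure.
MU_JT = 0.5                  # K/bar, rough value near ambient T
P_HIGH, P_LOW = 200.0, 1.0   # bar: compression / expansion pressures
T_LIQUEFY = 111.7            # K, boiling point of methane at 1 atm
T = 300.0                    # K, starting temperature

stages = 0
while T > T_LIQUEFY:
    # one pass: compress, then expand through the nozzle and cool
    T -= MU_JT * (P_HIGH - P_LOW)
    stages += 1

print(f"reached {T:.1f} K after {stages} expansion passes")
```

Even this toy model captures the essential idea: each compression-expansion pass removes a fixed slice of temperature, so repetition eventually crosses the liquefaction point.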
Practical designs for the commercial liquefaction of oxygen and nitrogen, the two gases most often required in industrial processes, followed during the last three decades of the 19th century. The most important contribution was made by Carl von Linde, whose process (patented in 1895) combined the Joule-Thomson effect with countercurrent cooling of compressed air. Liquid oxygen and nitrogen became readily available after WW I. But at that time the US was the only large-scale user of natural gas, and it had abundant domestic supplies, so there was no need to think about long-distance overseas imports that would require methane liquefaction. This explains why Godfrey Cabot’s patent for handling and transporting liquid natural gas, issued by the US Patent Office in 1915, was just another quaint proposal leading nowhere.
Eventually, in 1939, the world’s first small LNG storage facility was built in West Virginia, and a larger one, designed to supply fuel during periods of peak demand, followed in Cleveland in 1941. Three years later one of its steel tanks failed; the vaporized methane ignited and killed 128 people. For more than two decades afterwards – and despite the fact that the accident analysis found a substandard tank design and concluded that gas liquefaction and storage are not exceptionally dangerous – this accident was cited as proof that LNG was an inherently risky proposition. But the fear made little difference, as the post-WW II world was flooded with cheap crude oil (of which the US continued to be the largest producer) and as rising US gas supplies postponed the beginning of the LNG era for another generation.
The final steps toward a practical innovation began with European needs: in 1959 Methane Pioneer, a converted WW II Liberty class freighter, carried a demonstration shipment of just 5,000 m3 of LNG from Lake Charles, Louisiana, to Canvey Island on the Thames near London. The world’s first methane liquefaction plant was completed in Arzew, Algeria, five years later and the UK began importing the gas with two specifically designed tankers (Methane Princess and Methane Progress), each carrying 27,400 m3.
This was followed by the Japanese LNG imports from Alaska in 1969 and the French imports from Libya in 1970. But, yet again, circumstances became unfavourable and another delay set in when the European market was flooded with the gas from the Dutch supergiant Groningen field and from the North Sea. LNG imports became uncompetitive and the Arzew-Canvey contract was not renewed in 1979. And the plans for large-scale exports to the US also went nowhere: the peak consumption of 1979 was not surpassed until 1992 and then the post-1993 wellhead price deregulation boosted the domestic supply.
This left Japan (with no domestic gas resources) as by far the world’s largest LNG importer: by 1984 its purchases accounted for 75% of all LNG trade, and by 1999 they were still 66% of the total. Taiwan began importing LNG in 1990 and South Korea in 1991, but the LNG trade remained a province of uncompetitive long-term contracts served by dedicated plants and tankers. This market stagnation stalled technical innovation: for more than a generation, between the mid-1960s and the late 1990s, typical capacities of LNG trains (liquefaction units) remained at just 1-2 Mt/year, and although the largest ship capacities increased fairly rapidly during the first decade of LNG trade (to 126,227 m3 in 1975), three decades later the dominant sizes (largely due to Japanese restrictions on the maximum tonnage of LNG tankers) were still between 125,000 and 130,000 m3.
The number of LNG-importing countries rose only slowly, from one in 1964 to six by 1980 and 12 by the year 2000; the total LNG trade surpassed 50 Mt/year only by 1991, and it was only in 1999 that LNG tankers carried more than 5% of all exported gas. Then, finally, came a rapid change. Qatar’s LNG exports began slowly in 1997; Trinidad and Tobago’s exports came two years later, followed by Nigeria and Oman in 2000, Egypt in 2005, Equatorial Guinea in 2007 and Russia (from Sakhalin) in 2009. Maximum LNG train sizes rose to 5 Mt/year by 2005 and to more than 8 Mt/year by 2008, and decreasing costs of train construction and new, less expensive designs of larger tankers resulted in bold plans for further expansion.
Large aluminum spheres (Kvaerner-Moss shells, introduced in 1971), covered with insulation inside steel tanks and bolted to the vessel’s hull, dominated LNG tanker design for more than three decades. They wasted storage space and increased typical draft, making large-volume vessels impractical. In contrast, the membrane design (insulated tanks of thin stainless steel shaped to fit the inner hull) allows for ships larger than 200,000 m3, and Qatargas will eventually have 45 Q-Flex (210,000 m3) and Q-Max (266,000 m3) ships. Thanks to these innovations LNG has finally emerged as a globally available fuel that can be traded competitively on an intercontinental basis. Total export capacity rose from 100 Mt/year in 2000 to about 250 Mt/year by 2009, when the global LNG trade carried nearly 30% of all internationally traded natural gas.
The history of gas liquefaction and the LNG industry is an excellent illustration of lengthy time spans characteristic of major resource or prime mover shifts in large-scale energy infrastructures. Theoretical foundations of the liquefaction of gases were laid down 43 years before the first fundamental air liquefaction patent; the first LNG shipping patent was registered 44 years before the first intercontinental trial delivery of the gas; and it took more than 40 years after the first commercial delivery before LNG shipments rose to 25% of all internationally traded natural gas.
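The three spans quoted above can be checked directly against the dates given earlier in the text:

```python
# Verify the cited gestation periods from dates in the text.
liquefaction_theory = 1852    # Joule and Thomson's work
linde_patent = 1895           # fundamental air-liquefaction patent
cabot_patent = 1915           # first LNG shipping patent
first_trial_delivery = 1959   # Methane Pioneer's trial shipment
first_commercial = 1964       # Arzew plant; UK imports begin

print(linde_patent - liquefaction_theory)   # 43 years
print(first_trial_delivery - cabot_patent)  # 44 years
# And more than 40 years elapsed after 1964 before LNG shipments
# reached ~25% of internationally traded gas (i.e., the mid-2000s).
```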
These time spans, rather than any delusions about accelerated innovations and rapid doubling, are characteristic of energy transitions that bring fundamental changes in large-scale energy supply. Energy transitions are inherently gradual processes and this reality should be kept in mind when judging the recent spate of claims about the coming rapid innovative take-overs, be it of nationwide electricity generation by wind turbines or of global passenger car market by electric vehicles.
And the history of LNG also reminds us that the adoption of important innovations is subject to unpredictable events and hence often does not move along any predictable (initially exponential, eventually logistic) path. When commercial shipments of LNG began during the early 1960s, who could have predicted the two OPEC-driven crises of the 1970s, the record oil prices arising from those actions, their collapse after 1985, their generation-long doldrums and post-2006 resurgence? And who was factoring in the deregulation of US natural gas prices and the consequent boost of domestic extraction, or the emergence of horizontal drilling and the importance of nonconventional gas resources?
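The "initially exponential, eventually logistic" trajectory that diffusion forecasts typically assume can be sketched as follows. The saturation level, midpoint year, and growth rate are hypothetical parameters chosen only to show the curve's shape; the argument here is precisely that real adoption often departs from such a smooth path:

```python
import math

def logistic_share(year, saturation=0.30, midpoint=1995.0, rate=0.12):
    """Idealized logistic market share; all parameters hypothetical."""
    return saturation / (1.0 + math.exp(-rate * (year - midpoint)))

# Near-exponential growth early on, then saturation:
for y in (1964, 1980, 2000, 2020):
    print(y, f"{logistic_share(y):.1%}")
```

Forecasters draw curves like this one; history, as the OPEC shocks and shale gas show, keeps knocking reality off them.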
Because of such unpredictable changes, the industry launched in 1964 still moved only about 2% of all traded gas by 1980 and no more than 5% of all natural gas exports by 1999. And the unexpected interventions continue: by 2007 nothing seemed able to stop an early emergence of a very substantial global LNG market.
A New Reality Check
But then those rapid LNG capacity increases collided with a combination of the lower demand resulting from the economic crisis of 2008–2009 and of the decline of U.S. gas imports due to increased extraction of unconventional resources. The pace of global LNG diffusion has slowed down once again.
These unpredictable changes are also an inherent part of major energy transitions: think of them every time you see those smoothly ever-rising graphs of production, capacity or generation forecasts for new energy sources or conversions in the years 2025 or 2050.
Still, LNG technology–and a lot of sunk-cost infrastructure–is ready when the economics give the ‘go.’ The same cannot be said for other energy technologies that may be more politically favored.
Vaclav Smil is a Distinguished Professor at the University of Manitoba in Canada. He is the author of 30 books on energy, environment and history of technical advances. This post is based on research from two of his forthcoming books, Energy Transitions and Energy Myths and Realities.
I should mention that Professor Smil, one of the most prolific energy writers of our era, has joined MasterResource as a Contributor. All of us look forward to his occasional posts (he stays busy writing books!) in the months ahead.