The funny thing about carbon pricing is that even if you take the latest IPCC report as gospel, and even if you assume all of the governments around the world implement a perfectly efficient carbon tax, even so the “efficient” carbon tax ends up being fairly low for a few decades, and then it ramps up as atmospheric concentrations increase. (See William Nordhaus’s new book treatment [pdf] of his “DICE” model for an excellent exposition.) The intuition behind this result is that even the scary projections of catastrophic climate change don’t occur for more than one hundred years, and so discounting these future damages to the present leads to a modest externality from current emissions of another ton of carbon dioxide.
This phenomenon explains the fury with which partisans in the climate change debate argue over the proper “social discount rate.” The very aggressive policies recommended in the Stern Review, for example, are almost entirely driven [.pdf] by Stern’s use of a philosophically derived (low) discount rate, versus Nordhaus’s use of market-based interest rates. A given dollar-amount of climate damage occurring in, say, the year 2200 justifies a much bigger diversion of resources today, if we use a discount rate of 1% versus a discount rate of 4%.
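To see just how sensitive the calculation is to this one parameter, here is a toy computation with made-up numbers of my own (not Stern's or Nordhaus's): the present value today of a hypothetical $100 trillion of climate damage suffered in the year 2200, discounted at 1% versus 4%.

```python
# Toy illustration of discount-rate sensitivity. The $100 trillion damage
# figure and the 175-year horizon are assumptions for this example only,
# not numbers taken from Stern or Nordhaus.

def present_value(damage, rate, years):
    """Discount a future damage back to today at a constant annual rate."""
    return damage / (1 + rate) ** years

damage = 100e12   # $100 trillion of damage in 2200 (assumed)
years = 175       # rough horizon from today to 2200 (assumed)

pv_low = present_value(damage, 0.01, years)   # Stern-style ~1% rate
pv_high = present_value(damage, 0.04, years)  # market-based ~4% rate

print(f"PV at 1%: ${pv_low / 1e12:.1f} trillion")
print(f"PV at 4%: ${pv_high / 1e9:.1f} billion")
print(f"ratio: {pv_low / pv_high:.0f}x")
```

At these (invented) numbers, the 1% rate justifies devoting well over a hundred times more resources today than the 4% rate does. That is the entire fight over the social discount rate compressed into one line of arithmetic.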
There is an entirely separate front in the climate battles, however. Harvard economist Martin Weitzman has been challenging the very applicability of cost-benefit analyses as described above. Weitzman argues [.pdf] that the standard economic modeling of behavior under uncertainty works well enough when the distribution of outcomes can be approximated by a bell curve. (I’m dumbing this down a bit; it’s not really a bell curve that is required.)
But the possible damages from climate change are not like an insurance company's possible damages from clients' heart attacks. Instead, there are catastrophic outcomes with small probabilities: the proverbial "black swan" so beloved by financial maverick Nassim Taleb.
Weitzman argues that in this situation, standard cost-benefit analysis (CBA) breaks down. When some of the potential outcomes involve the deaths of hundreds of millions of people, not to mention the destruction of the world economy, Weitzman says that it is worse than useless to robotically assign a numerical value to these losses, and then discount exponentially at whatever rate one decides is relevant.
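Weitzman's breakdown can be sketched numerically. The following is a toy example of my own, not his model: take CRRA utility with a risk-aversion coefficient of 2, so U(c) = -1/c, and write log-consumption as x, making the disutility weight e^(-x). Then compare the expected disutility contributed by the bad tail when x is thin-tailed (standard normal) versus fat-tailed (Student-t with 3 degrees of freedom).

```python
import math

# Toy sketch of the fat-tails problem (my own example, not Weitzman's model).
# With CRRA utility U(c) = -1/c and log-consumption x, disutility is e^{-x}.
# We integrate e^{-x} * pdf(x) over the lower tail x in [-depth, 0] and see
# whether the contribution settles down as we probe deeper into the tail.

def normal_pdf(x):
    """Standard normal density: thin (Gaussian) tails."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def student_t3_pdf(x):
    """Student-t density with 3 degrees of freedom: polynomial (fat) tails."""
    return 2.0 / (math.sqrt(3) * math.pi) * (1 + x * x / 3) ** -2

def tail_disutility(pdf, depth, step=0.01):
    """Riemann sum of e^{-x} * pdf(x) for x in [-depth, 0]: the expected
    disutility contributed by the lower tail, truncated at -depth."""
    total = 0.0
    x = -depth
    while x < 0:
        total += math.exp(-x) * pdf(x) * step
        x += step
    return total

for depth in (10, 20, 40):
    print(depth, tail_disutility(normal_pdf, depth),
          tail_disutility(student_t3_pdf, depth))
```

Under the normal distribution the tail's contribution converges almost immediately; under the fat-tailed distribution it explodes without bound as you probe deeper, because the exponentially growing disutility overwhelms the merely polynomial decay of the density. That divergence is (loosely) why the standard expected-utility machinery stops giving usable answers.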
As one might expect, the alarmists in the climate change debate have seized upon Weitzman’s results, because they can use him to knock out the standard models which cannot be tortured into supporting the aggressive emissions cuts that the alarmists favor. The excitable Joe Romm’s discussion of Weitzman illustrates this perfectly (bold in original):
If we don’t keep concentrations below a threshold like 450 ppm (or lower), we face the prospect of essentially incalculably large damages that might well get worse and worse for centuries.
What exactly is the cost of sea level rise of 5 feet in 2100, rising thereafter 10 inches or more a decade (potentially reaching 20 inches a decade or higher) until the planet is ice free in several centuries and sea levels are 250 feet higher? That is certainly a plausible scenario on our current emissions path. Indeed, once again, I’d call it business as usual. If any of you economists out there have a plausible net present value of the cost of that outcome, I’d love to see it. Same for one third of the planet becoming a permanent desert and large parts of the ocean becoming hot, acidic dead zones.
That is why essentially every cost-benefit analysis on climate in the literature is wrong and useless and hence very dangerous if taken seriously by policymakers.
To repeat, this dismissal of CBA itself is necessary if one wants to push through an extreme mitigation program. There is no way to come up with a justification for doing the things Romm wants to do, except by adopting Romm’s tactic of threatening extinction unless policymakers do exactly what he says.
In the present post, my purpose is merely to explain what Weitzman’s work is about, and further to explain its relevance to the climate change debate. For those able to parse mainstream economics, I point out this fantastic rebuttal [.pdf] to Weitzman by Nordhaus. (As I wrote to Nordhaus when he included me in a distribution of the early draft of this reply: “This is fantastic! I am now prepared to forgive your textbook collaboration with Samuelson.” Not an exact quote, but I did send something comparably wise-alecky.)
Basically, Nordhaus shows that Weitzman’s formal result isn’t as general as one might have supposed. In other words, Weitzman did not prove that anytime one has “fat tails” in the distribution, CBA breaks down. I will close with the following excerpt from Nordhaus’s paper which illustrates one of the tricks Weitzman used to get his result:
The CRRA functions that Weitzman analyzes (with η > 1) assume that utility goes to negative infinity (and marginal utility grows without bound) as consumption goes to zero. This has the unattractive and unrealistic feature that societies would pay unlimited amounts to prevent an infinitesimal probability of zero consumption. For example, assume that there is a very, very tiny probability that a killer asteroid might hit Earth, and further assume that we can deflect that asteroid for an expenditure of $10 trillion. The CRRA utility function implies that we would spend the $10 trillion no matter how small was the probability. Even if the probability were 10^(-10), 10^(-20), or even 10^(-1,000,000), we would spend a large fraction of world income to avoid these infinitesimally small outcomes (short of going extinct to prevent extinction).
An alternative would be to assume that near-zero consumption is extremely but not infinitely undesirable. This is analogous in the health literature to assuming that the value of avoiding an individual’s statistical death is finite. To be realistic, societies tolerate a tiny probability of zero consumption if preventing zero consumption is ruinously expensive.