The Icarus Syndrome

William Ewart Gladstone, in his 1866 budget speech, warned that Britain faced the prospect of exhausting its domestic coal within a century and had poor prospects of finding sufficient alternative energy sources. This was not an idiosyncratic point of view. The year before, the economist William Stanley Jevons’s influential book The Coal Question had made the same prediction and proposed a set of policies to conserve coal for the inevitable lean times. John Stuart Mill supported both the thesis and Jevons’s proposals. Newspapers took up the “Coal Panic,” and a Royal Commission on Coal was created. Eventually the issue fizzled, and Britain moved on to other, more pressing, concerns.

It was true that British coal production could not grow indefinitely, and it did not. The essential points that Jevons missed, however, were the feasibility of displacing coal with petroleum as a source of energy and the decreasing centrality of low-cost coal to the industries that were to lead Britain in the 20th century. But then again, as the great physicist Niels Bohr reportedly said, “Prediction is hard, especially concerning the future.”

Successful modern economies create unprecedented wealth and material ease, but they also tend to generate characteristic anxieties. One of them is the recurring belief that the whole thing is a house of cards. Psychologically, this is the fear that there is some hidden danger that will cause modern society to collapse, and that we would have been better off if we had stayed lower to the ground and not tried to build such an overwhelming success. The most compelling of these stories often involve problems that the modern system has supposedly created itself. Call it the Icarus Syndrome.

The British Coal Panic might seem quaint from our vantage point more than a century later, but many similar fears are fashionable today. “Peak oil” is the almost perfectly analogous theory that the world is about to reach, or already has reached, maximum possible production of oil, and that we are about to experience rapid reductions in output with dire global consequences. Like Jevons, most peak oil advocates argue that we need to begin to husband our resources, rather than innovate our way around this projected problem.

The peak oil theory usually proceeds from the correct prediction in 1956 by Shell Oil geologist M. King Hubbert that oil production in the United States would hit its high sometime in the late 1960s or early 1970s. Advocates much more rarely note, however, that in 1974 Hubbert also predicted that global oil production would peak in about 1995. Whoops. It turns out that it’s more feasible to predict peak production for a very well-understood region, such as the United States, than for the world as a whole.

Unsurprisingly, the U.S. Department of Energy (DOE) has taken a serious look at this question. It projects rising global production through 2030 and does not forecast beyond that date. The International Energy Agency, sponsored by the OECD in Paris, also projects rising production through 2030. So does OPEC. In 2005, Guy Caruso, the head of the Energy Information Administration, the responsible agency within the DOE, offered his best guess that peak production would probably be reached “sometime in the middle of this century.”

Caruso identified 36 academic forecasts for peak oil published between 1972 and 2004. There is an obvious pattern in the data. Roughly speaking, academic forecasts indicate that we are about 20 years from peak oil today, just as such forecasts generally indicated that we were about 20 years from peak oil throughout the 1970s and 1980s. What if we had reacted to these earlier, incorrect predictions of resource exhaustion with, as many advocated at the time, government coercion to force a decrease in petroleum use and to limit growth? We very likely would not have found ourselves on the other side of one of the greatest periods of wealth creation in American history, and therefore would probably not be in the happy position of paying, even at 2008 prices, a smaller share of GDP for oil than we did in 1980.

There is a finite amount of oil in the world, so we will eventually reach a production maximum. We have, however, a very poor track record in predicting when this will happen, and the world’s leading experts will provide only the most general guidance that it looks like we probably have several decades of production growth in front of us. Much like the British looking forward from the 1860s, we don’t have a very good idea of what the technology landscape, and much else besides, will be when or if this occurs. Almost certainly, the best course of action is the simplest: Let markets integrate this information into prices for oil and alternative energy sources, and then let entrepreneurs use this information to guide the deployment of resources through markets.

The current concern over global warming is similar to the Coal Panic and the peak oil debate. It also starts with a valid observation. Modern economies emit a lot of carbon dioxide (CO2), and all else being equal, the more CO2 molecules we put into the atmosphere, the hotter it gets. If we were to emit enough CO2 and drive temperatures up high enough, it would be disastrous for humanity. If you believe that such a disaster is in the offing, there is a fairly simple solution: Emit less CO2. The typical methods proposed to do this are either to tax carbon emissions or to introduce a “cap-and-trade” system (in less fancy language, to ration CO2 emissions and have the government auction off the ration cards). But this once again raises the huge question of prediction. Namely, how much hotter would our expected rate of carbon dioxide emissions make the world, and how bad would this be?

The United Nations Intergovernmental Panel on Climate Change (IPCC) is the largest existing global effort to answer these questions. Its current consensus forecast is that, under fairly reasonable assumptions for world population and economic growth, global temperatures will rise by about 3˚C by the year 2100. Also according to the IPCC, a 4˚C increase in temperatures would cause total estimated economic losses of 1-5 percent of global GDP. By implication, if warming reaches 3˚C by 2100, we would not see a 4˚C rise, with its associated level of cost, until well into the 22nd century.
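A quick back-of-the-envelope check makes the arithmetic behind that implication explicit. This is only a sketch, and it assumes, purely for illustration, that the warming accrues roughly linearly from a baseline of about the year 2000; the IPCC scenarios themselves are not that simple.

```python
# Back-of-the-envelope extrapolation of the IPCC figures cited above.
# Illustrative assumptions (mine, not the IPCC's): warming accrues roughly
# linearly, measured from a baseline year of about 2000.

baseline_year = 2000
rise_by_2100_c = 3.0                      # IPCC consensus forecast cited above
rate_per_year = rise_by_2100_c / (2100 - baseline_year)   # about 0.03 C per year

# When would a 4 C rise (the level tied to 1-5 percent GDP losses) arrive?
year_of_4c = baseline_year + 4.0 / rate_per_year

print(f"Implied warming rate: {rate_per_year:.3f} C per year")
print(f"A 4 C rise would arrive around the year {year_of_4c:.0f}")
# Under these assumptions, roughly the 2130s: well into the 22nd century.
```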

This is a big problem for advocates of rapid, aggressive emissions reductions. Despite the rhetoric, the best available estimate of the damage we face from global warming is not “global destruction” but costs on the order of 3 percent of global GDP in a much wealthier world well over a hundred years from now.

One serious objection to this logic is that the forecasts for warming impacts might be wrong, and global warming could turn out to be substantially worse than the IPCC models predict. Now, climate and economics modelers aren’t idiots, so it’s not like this hasn’t occurred to them. Competent modelers don’t assume only the most likely case, but build probability distributions for levels of warming and associated economic impacts (e.g., there is a 5 percent chance of 4.5˚C warming, a 10 percent chance of 4.0˚C warming, and so on). So, the possibility of “worse than expected” impacts really means, more precisely, “worse than our current estimated probability distribution.” That is, we are concerned here with the real, but inherently unquantifiable, possibility that our probability distribution itself is wrong.
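To make the idea of an odds-adjusted estimate concrete, here is a minimal sketch of how such a distribution is turned into an expected cost. The warming levels, probabilities, and damage figures below are invented placeholders for illustration, not output from any actual climate or economic model.

```python
# Minimal sketch: converting a probability distribution over warming outcomes
# into an odds-adjusted (expected) damage estimate.
# Every number here is an invented placeholder, not real model output.

scenarios = [
    # (warming in degrees C, probability, damages as percent of global GDP)
    (2.0, 0.35, 1.0),
    (3.0, 0.40, 2.5),
    (4.0, 0.15, 4.0),
    (4.5, 0.05, 6.0),
    (5.5, 0.05, 10.0),
]

# The probabilities should sum to one if the distribution is complete.
assert abs(sum(prob for _, prob, _ in scenarios) - 1.0) < 1e-9

expected_damage = sum(prob * damage for _, prob, damage in scenarios)
print(f"Odds-adjusted expected damage: {expected_damage:.2f} percent of global GDP")

# The "worse than expected" worry discussed in the text is not about the tail
# rows of this table; it is the possibility that the table itself is wrong.
```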

The argument for emissions abatement, then, boils down to the point that you can’t prove a negative. If it turns out that not just the best estimate, but even the outer edge of the probability distribution of our predictions for global-warming impacts is enormously conservative, and disaster looms if we don’t change our ways radically and this instant, then we really should start shutting down power plants and confiscating cars tomorrow morning. We have no good evidence that such a disaster scenario is imminent, but nobody can prove it to be impossible. Once you get past the table-thumping, any rationale for emissions abatement that confronts the facts in evidence is really a more or less sophisticated restatement of the Precautionary Principle: the somewhat grandiosely named idea that the downside possibilities are so bad that we should pay almost any price to avoid almost any chance of their occurrence.

One could argue that we should therefore push down carbon dioxide emissions far faster than the odds-adjusted risk of global-warming costs appears to justify. How much faster? One widely discussed benchmark for a “safe” level of emissions is to cap the atmospheric concentration of CO2 at no more than 150 percent of its current level. Suppose we did this via what most economists believe is the most efficient imaginable means: a globally harmonized and perfectly implemented worldwide tax on carbon. According to the modeling group led by William Nordhaus, a Yale professor widely considered to be the world’s leading expert on this kind of assessment, humanity could expect to spend about $17 trillion more under such a regime than the benefits we would expect to achieve. To put that in context, the annual GDP of the United States is about $13 trillion. That’s a heck of an insurance premium for an event so unlikely that it is literally outside of our probability distribution. But I can find major public figures who say that this level of atmospheric carbon dioxide is still too dangerous. Al Gore has proposed an even lower target for emissions that, if implemented through an optimal carbon tax, is expected to cost more like $23 trillion in excess of benefits. Of course, even this wouldn’t eliminate all risk, and I can find highly credentialed scientists who say we need to reduce emissions even faster. Once we leave the world of odds and trade-offs and enter the Precautionary Principle zone, there is no nonarbitrary stopping point. We would be chasing an endlessly receding horizon of zero risk.

But to force massive change in the economy based on such a fear is to get lost in the hothouse world of single-issue advocates and become myopic about risk. We face lots of other unquantifiable threats of at least comparable realism and severity. A regional nuclear war in Central Asia, a global pandemic triggered by a modified version of HIV, or a rogue state weaponizing genetic-engineering technology all come immediately to mind. Any of these could kill hundreds of millions of people. Specialists often worry about the existential risks of new technologies spinning out of control. Biosphere-consuming nanotechnology, supercomputers that can replace humans, and Frankenstein-like organisms created by genetic engineering are all topics of intense speculation. Sometimes, though, we face monsters from the deep: The cover of the June Atlantic Monthly said of the potential for a planet-killing asteroid, “The Sky Is Falling!”

A healthy society is constantly scanning the horizon for threats and developing contingency plans to meet them, but it’s counterproductive to become paralyzed by our fears. The loss of economic and technological development that would be required to eliminate all theorized climate change risk or all risk from genetic and computational technologies or, for that matter, all risk from killer asteroids would cripple our ability to deal with virtually every other foreseeable and unforeseeable risk, not to mention our ability to lead productive and interesting lives in the meantime. The Precautionary Principle is a bottomless well of anxieties, but our resources are finite.

In the face of massive uncertainty, hedging your bets and keeping your options open is almost always the right strategy. Money and technology are the raw materials for options. The idea of the simple, low-to-the-ground society as more resilient to threats is, like the story of Icarus, a resonant myth. But experience shows that wealthy, technologically sophisticated societies are much better able to withstand resource shortages, physical disasters, and almost every other challenge than poorer societies.

Consider that if a killer asteroid were actually to approach the Earth, we would rely on orbital telescopes, spacecraft, and thermonuclear bombs to avert disaster. In such a scenario, we would be very glad that we hadn’t responded to the threat of peak coal back in the 1860s by slowing our development to such an extent that we lacked one of these technologies. In the case of global warming, a much more appropriate approach than rationing energy and forgoing trillions of dollars of economic growth is to invest billions of dollars in targeted scientific research that would give us technical alternatives if a worst-case scenario began to emerge.

We should be very cautious about implementing government programs that require us to slow economic growth and technological development in the near-term in return for the promise of avoiding inherently uncertain costs that are projected to appear only in the long-term. Such policies conceal hubris in a cloak of false humility. They inevitably demand that the government coerce individuals in the name of a nonfalsifiable prediction of a distant emergency. The problem, of course, is that we have a very bad track record of predicting the specific problems of the far future accurately.

We can be confident that humanity will face many difficulties in the upcoming century, as it has in every century. We just don’t know which ones they will be. This implies that the correct grand strategy for meeting them is to maximize total technical capabilities in the context of a market-oriented economy that can integrate highly unstructured information into prices that direct resources, and, most important, to maintain a democratic political culture that can face facts and respond to threats as they develop.

Jim Manzi is chief executive officer of an applied artificial-intelligence software company.
