Mises Daily

Economic and Climate Models

In a recent post on the popular blog Marginal Revolution, a reader asked GMU professor Tyler Cowen whether his experience with economic models shed any light on the promise or pitfalls of climate models. I think the question is brilliant, and (in my immodest opinion) far more interesting than the answer Cowen gave. In the present article I’ll attempt to give the question the fuller treatment I believe it deserves.

The “rigorous,” peer-reviewed Keynesian economic models reached the zenith of their professional success in the 1950s and 1960s, and yet, in retrospect, many economists would now admit that they were fundamentally flawed and provided horrible policy recommendations. Despite the obvious differences between the disciplines, this sorry episode from the field of economics should counsel caution before we hand even more power to politicians on the basis of the results of mainstream climate models.

What Went Wrong in Economics

Professional climatologists are aware of the analogy between economic and climate models, and they (correctly) point out that the social sciences are far less susceptible to mathematical description and computer simulation than are the natural sciences. Even though there are some loose joints in climate models, they are still built up from the laws of physics. In contrast, there is nothing analogous in a macroeconomic model, especially a “hydraulic” Keynesian one from the 1950s. Even the tautologies from microeconomics — such as the fact/postulate that a consumer maximizes his utility subject to his income — weren’t in the earlier macro models. To put it another way, macroeconomists had (and continue to have) far more leeway, or degrees of freedom, in constructing their models than the climatologists do.

Having conceded that there is more of an “idiot check” on climate models, we should still remind ourselves of what went wrong in the economics profession. Climatologists shouldn’t comfort themselves by saying, “Well gee whiz, our PhDs don’t do anything that sloppy!” What happened in economics was that pioneers such as Paul Samuelson and Kenneth Arrow showed how elegant economic models could be. Sure, some of the wisdom of the old school “verbal” economists was lost in the translation, but oh, how rigorous and precise was the new approach! Although government funding certainly had something to do with the transformation, I think some Austrian economists underestimate the seductive power of formal modeling, and how a rising generation of the brightest (and mathematically adept) economists were drawn to the work of Samuelson, et al.

Of course, in retrospect these whiz kids at MIT and Harvard were giving the government horrible policy recommendations (and, in general, they still are). Even on its own terms — i.e., not just within the whiny circles of Austrians — mainstream macroeconomics was very crude in those early decades. Macro theorists now understand the importance of putting “micro” foundations (i.e., maximizing agents) into the model; Milton Friedman and others showed the dangers of relying on short-term correlations, especially the Phillips Curve; and Robert Lucas showed the contradictions in assuming people in the economy don’t react when government policies change. Today’s macroeconomists understand that it was poor central banking and fiscal policies that led to the stagflation of the 1970s. And yet, the government’s awful decisions weren’t due to its flippant dismissal of the economists. The government’s actions, though certainly influenced by petty politics, were still consistent with “expert” macroeconomic thinking. Nobel laureate Samuelson and the other top academics weren’t warning of stagflation when Nixon took the United States off the gold standard.

To be sure, there were some Keynesian skeptics, some who denied the entire legitimacy of the macro models with their focus on boosting aggregate demand to ensure full employment. But if a Murray Rothbard blamed recessions on unsustainable investments in higher order goods, or an Israel Kirzner stressed the importance of entrepreneurship, someone like Samuelson could quite correctly say, “Look, you can write up your cute little verbal critiques of our models all you want; we never claimed they were perfect. Maybe it’s true — though I doubt it — that all of us Nobel laureates are totally wrong about this stuff. So if you want to take the latest Solow model and incorporate a Hayekian triangle, you are free to do so — though Hayek himself found the task so formidable that he switched to writing about the human mind. And as far as entrepreneurship, go ahead and write up a model that is both interesting and rigorous, which shows agents discovering their ignorance over time. Once you write that up with Greek letters and get it published in a top journal, we’ll talk. Now if you’ll excuse me, I have to tell Tricky Dick how much to boost aggregate investment this quarter.”

What May Have Gone Wrong in Climatology

Naturally I have framed the fanciful description above in order to parallel the arguments in climate science. You have a few big guns such as Richard Lindzen and Roy Spencer who really do publish peer-reviewed papers critical of the “consensus”; Lindzen is even a professor of meteorology at MIT. Yet they are considered smart but ideologically driven. The parallel is exact: Milton Friedman was undeniably a sharp guy cranking out papers from the University of Chicago, and Hayek even won a Nobel Prize, yet despite their credentials, Friedman and Hayek posed no serious challenge to the Keynesian orthodoxy since, after all, they wrote for the layperson who opposed macro fine-tuning on ideological grounds.

My current job requires me to at least familiarize myself with the typical climate scientist’s case for anthropogenic (a fancy term for “manmade”) global warming. The deeper I get into the literature — especially when reading their responses to skeptics — the more analogous I think the situation is to the episode in economics sketched above. Richard Lindzen, for example, thinks that the cutting-edge models do not correctly model certain processes in the atmosphere at the “micro” level. Orthodox climatologists concede the point, but then challenge Lindzen to tweak their models in order to come up with a better simulated fit with historical observations (on temperatures, rainfall, etc.). Thus far Lindzen has been unable to do this, because even the fastest computers cannot run a simulation of the entire world at a scale fine enough to capture the effects Lindzen points to while still obeying all the laws of physics.1 What has happened (it seems to me) is that even the latest generation of climate models necessarily makes some heroic simplifying assumptions in order to render the model tractable. Lindzen isn’t accusing the modelers of being lazy. Even so, he maintains that their models are still crude and give very misleading results. The parallel with the standoff between Paul Samuelson and, say, Israel Kirzner should be obvious.

What is particularly worrisome to me is that the case for anthropogenic global warming runs basically like this (see the relevant section of the Intergovernmental Panel on Climate Change Fourth Assessment Report [IPCC AR4] in their own words, especially Frequently Asked Question 9.2 on page 702): When the modelers simulate the 20th century, they achieve a closer fit to the historical trends if they assume large, positive feedback effects from human greenhouse gas emissions. If the modelers adjust the dials (so to speak) and turn down the possible influence of human emissions, then, so long as we insist the models obey the laws of physics, the fit between the simulated temperatures and observed temperatures gets worse.

To an economist, at first this sounds as if they are simply running regressions, and they get a higher R² by bumping up the coefficient on the greenhouse gas term. But the climate models are more sophisticated than a naïve regression. They really are simulations, trying to boil down the entire earth (including a crude version of the oceans) into a form that a computer can handle. Even so, there are still several components with uncertain effects (such as whether aerosols released into the atmosphere by human activity on net caused the globe to be warmer or cooler in the 1970s), and thus the climate modelers have some freedom in their approaches; they are not completely constrained by the laws of physics, since they have to cut some corners in order for the computer to be able to process the model.
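
To see concretely what that naive regression picture would amount to (a deliberate caricature, not what the IPCC models actually do; all data and numbers below are invented purely for illustration), here is a minimal sketch in Python of an ordinary least-squares fit of temperature on a single greenhouse forcing term, reporting the R² that a bigger coefficient is supposed to buy:

```python
# A caricature of the "naive regression" view of climate modeling:
# fit one greenhouse coefficient to a temperature series and report R^2.
# All numbers are invented for illustration; real climate models are
# physical simulations, not curve fits like this.
import numpy as np

years = np.arange(1900, 2001)
forcing = np.linspace(0.0, 1.6, years.size)      # hypothetical forcing ramp (W/m^2)
rng = np.random.default_rng(0)
temps = 0.4 * forcing + rng.normal(0.0, 0.05, years.size)  # fake anomalies (deg C)

# Ordinary least squares: temps ~ coef * forcing + intercept
X = np.column_stack([forcing, np.ones_like(forcing)])
coef, intercept = np.linalg.lstsq(X, temps, rcond=None)[0]

fitted = coef * forcing + intercept
r_squared = 1.0 - np.var(temps - fitted) / np.var(temps)
print(f"greenhouse coefficient: {coef:.2f} degC per (W/m^2), R^2 = {r_squared:.2f}")
```

The contrast is that nothing in this exercise is constrained by physics; the coefficient is free to be whatever makes the fit best, a freedom the real simulations only partially share.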

Indeed, this is why there is such a wide range of plausible estimates for how much global temperatures will increase in the long run due to a doubling of carbon dioxide concentrations in the atmosphere. Scientists agree that if these concentrations double (relative to preindustrial times, with a benchmark year of 1750), global mean surface temperatures will rise around 1.2°C just from the direct operation of the enhanced greenhouse effect. Yet the IPCC AR4 assessment says that a doubling will lead to an “equilibrium” (i.e., long-run) temperature increase that is “likely” in the range of 2 to 4.5°C, with a best guess of 3°C. The range of estimates is significantly higher than the direct effect because temperature rises themselves are assumed to set further warming in motion. For example, as the earth warms due to greenhouse gas emissions, the atmosphere will hold more water vapor, which in turn will enhance the greenhouse effect.
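
As a back-of-the-envelope check on where that 1.2°C figure comes from (my own illustration, using two numbers that are standard in the climate literature but are not taken from this article’s sources: a no-feedback “Planck” response of roughly 3.2 watts per square meter per degree, and a forcing of about 3.7 watts per square meter for a CO2 doubling):

$$\Delta T_{\text{direct}} \approx \frac{\Delta F_{2\times \mathrm{CO_2}}}{\lambda_0} \approx \frac{3.7\ \mathrm{W/m^2}}{3.2\ \mathrm{W/m^2\,{}^{\circ}C^{-1}}} \approx 1.2\,{}^{\circ}\mathrm{C}$$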

All of the truly nightmarish scenarios obviously involve high estimates for “climate sensitivity,” i.e., how much the globe warms not only because of the direct effect of greenhouse gases, but also because of the “positive feedbacks” this warming itself induces. It is because of these scenarios that Joseph Romm warned in a recent Cato debate that unless immediate action is taken to stabilize CO2 concentrations at 450 ppm, we invite the destruction of modern civilization. (Really, he said that; see Jim Manzi’s more sober view.)

Now even though the skeptics cannot offer a full model of the world’s climate system of their own that outperforms the simulations favored by the IPCC, it is entirely appropriate to question how accurate the current batch of models really is. From the preindustrial benchmark date until 2005, atmospheric CO2 concentrations increased some 35 percent (from about 280 ppm to 379 ppm), while temperatures increased only around 0.7°C. If this were the end of the story — i.e., if the increase in CO2 were the only relevant change, and if global temperatures stopped increasing after this 0.7 degree rise — then the actual climate sensitivity would be below the lower end of the IPCC’s range.
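
Readers can check this order of magnitude themselves. A standard simplified expression in the climate literature (an addition of mine, not something this article’s sources derive, though it is consistent with the IPCC’s own forcing figures) puts the radiative forcing from a change in CO2 concentration at about 5.35 ln(C/C₀) watts per square meter, so the rise from 280 ppm to 379 ppm works out to

$$\Delta F_{\mathrm{CO_2}} \approx 5.35 \ln\!\left(\frac{379}{280}\right) \approx 1.6\ \mathrm{W/m^2},$$

a bit under half the roughly 3.7 watts per square meter associated with a full doubling.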

Now in fairness to the IPCC modelers, the above observation by itself is irrelevant. Other things happened between 1750 and 2005 besides human emissions of CO2, and even if we stopped emitting greenhouse gases tomorrow, it is possible that the world would continue warming as positive feedbacks already in motion played themselves out. Still, even on its own terms, almost half of the “consensus” estimate of the earth’s sensitivity to greenhouse gases has yet to be observed. The IPCC AR4 reports a best guess of +1.6 watts per square meter as the total radiative forcing (which is what drives the climate) from all changes, both anthropogenic and natural (solar activity and aerosols from volcanic eruptions), from preindustrial times through the year 2005. Again, this estimated total forcing went hand in hand with an observed temperature increase of 0.7°C. Now a hypothetical doubling of CO2, holding all other forcing mechanisms constant at preindustrial values, would yield a forcing of +3.7 watts per square meter. Because the relationship between radiative forcing and temperature increase is considered approximately linear, these numbers imply that the observed “transient” (medium-run) climate sensitivity thus far is about 1.6°C, a little over half of the official best guess of 3°C.
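
To make that last step of arithmetic explicit (this is just the paragraph’s own numbers rearranged, under the stated linearity assumption):

$$S_{\text{transient}} \approx \Delta T_{\text{obs}} \times \frac{\Delta F_{2\times}}{\Delta F_{\text{obs}}} = 0.7\,{}^{\circ}\mathrm{C} \times \frac{3.7\ \mathrm{W/m^2}}{1.6\ \mathrm{W/m^2}} \approx 1.6\,{}^{\circ}\mathrm{C}$$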

Let me be clear: I am not saying that the climate models are demonstrably wrong. I have found no reason to doubt that the IPCC models really are the state of the art, or that the official ranges really are the “best guess” of the globe’s sensitivity — if you are going to calculate those numbers using the methods currently in vogue. To go back to economics, it is not as if Paul Samuelson made a mistake in his arithmetic and that’s why the United States experienced stagflation. If you had to calibrate a Keynesian macro model, to find out just how much output would increase from a given injection of government spending, you couldn’t do better than picking Paul Samuelson to run those regressions. But if you suspected that perhaps the very approach of the model was flawed, you wouldn’t expect Paul Samuelson — the very person responsible for the mainstream approach — to engineer a paradigm shift.

To return to the climate models: If you want to use a computer to simulate the globe’s climate over hundreds of years, and you want it to incorporate all of the factors that other models currently incorporate, you will be hard put to show that humans have had a negligible effect on the climate. However, using this very approach, you will end up concluding that there are large positive feedback effects from human greenhouse gas emissions that so far have not been observed. It’s true: the models are consistent with the historical data. They assume that the large positive feedbacks have been partially masked thus far by offsetting factors (such as aerosols in the 1970s reflecting sunlight back into space), and that not enough time has passed for the positive feedbacks to fully kick in.

What I want to stress is that the alarmist scenarios are not mere naïve extrapolations of existing trends; on the contrary, they rely on large amplifications of those trends. If global temperatures respond to human emissions in the 21st century the way they (apparently) did in the 20th, there will be no cause for alarm. It is only by assuming that there is disaster “in the pipeline” that has not yet manifested itself that one can make a case for massive restrictions on carbon use.

Conclusion

I have tried to show in this article the similarity between the cutting-edge macroeconomic models of the 1950s and 1960s and the climate models touted by the IPCC. In both cases, really smart guys (and now gals too) built impressive models that were quite rigorous in some respects, yet woefully deficient in others. In the case of economics, this hubris led to horrible government policies. We can only hope the same doesn’t happen because of the climate models. It is true that science must start somewhere, and a bad model is better than no model. But this truism does not mean governments should expand their powers whenever the “best guess” says it would be a good idea.2 A wise student of history should also be able to say, “We really don’t understand this very well yet.”

  • 1I am grateful to Robert Bradley for this point. An excellent illustration of this contrast between micro and macro in climate models is a new regional study predicting an increase in Middle East rainfall of 50%, in contrast to the global models which only a month earlier had predicted severe water shortages in the region.
  • 2Even if the IPCC consensus were true, it still wouldn’t follow that governments would “fix things.” But that is a different story.