Mises Daily Articles
The Myth of the Model
We live in an age where abstract models of the real world are held in high regard. Wall Street firms hire mathematicians and physicists to create sophisticated mathematical models of various assets and markets. Meteorologists employ computer models to predict the path of storms. Marketing firms model the anticipated consumer response to a proposed ad campaign. Bridges are built, planes are flown, giant buildings are raised, and crops are planted with the aid of abstract systems of equations. Military strategists use models to simulate the course of battles and wars under various scenarios; indeed, the Iraq War was war-gamed long before the fighting began.
The current esteem in which such models are held is not baseless. Since the time of the scientific revolution of the sixteenth and seventeenth centuries, the use of mathematical models has greatly increased mankind's mastery of the physical world. While their success in describing the activities of people themselves has not been as notable, some have asserted that this failure is due only to the relative youth of the social sciences and the complexity of their subject matter. Given time, so the argument goes, individual behavior and social phenomena will be modeled just as successfully as the physical world is today.
But, if one is absorbed in the fascinating project of seeking and perfecting such abstractions, it is easy to forget that even the most useful, most sophisticated models are only skeletal images of some full experience. Watching a simulation of a hurricane on a computer screen is a far cry from actually being in the midst of one. The chaos that ensues once a real battle is underway is never captured in a model of the conflict. A mathematical description of the atmospheric refraction of light at sunset does not convey the power of the setting sun as a metaphor for old age and death, the wistful nature of a winter sunset on a lonely moor, or the romantic mood created by watching the sun sink into the sea from the shore of a desert isle.
Correctly interpreting the relationship between a model and the complete experiences from which it was abstracted is a matter of skilled judgment. A model cannot interpret itself; it asserts that, if certain aspects of a particular situation closely conform to specifications contained in the model, then we can expect certain other circumstances to arise, either with full certainty or with some measure of probability. It cannot mechanically spit out an answer as to how well it applies to some real event; that determination requires skilled, experienced judgment.
We have arrived at a crucial fact about the application and misapplication of models: That someone is highly skilled at developing and manipulating the abstractions that make up a model does not necessarily mean that he is also adept at interpreting how that model relates to reality. A person who is extremely good at creating models of financial instruments may be awful at trading securities based on his models, which is why investment banks employ traders to use the software created by the modelers. Skill at simulating battle scenarios does not equate to the ability to adjust one's strategy to the ever-evolving conditions on the field, which is why modern armies have battle-experienced officers in charge of their troops during a real conflict.
Austrians are acutely aware of the gap between skill at modeling and the ability to interpret how models relate to the real world when it comes to economics. They have often noted that mainstream economists typically develop highly simplified models of some economic process, and then proceed to criticize the real economy because it does not fit the model.
The theory of perfect competition is a salient example. It depicts an impossible state of affairs, a "market" in which neither the buyers nor the sellers of a good can affect its price. Instead, they are "price takers," accepting as unalterable a price that is the outcome of mathematical equations rather than human action. When supply or demand conditions change, the model itself spits out a new price. As Robert Murphy cogently put it, in the world of perfect competition, when the demand for a good drops, all of the sellers suddenly discover, to their surprise, that they are now offering the good at a lower price!
The perfect competition model is not even internally coherent, as pointed out by mainstream economist G.B. Richardson (see O'Driscoll and Rizzo, 1996, pp. 90–91). Since all market participants in the model have identical knowledge of economic conditions, in response to a higher-than-normal return in some good's market, all of them will be motivated to become suppliers of the good. However, their mass entry would mean that suppliers of the good would receive a below-normal return. Since they all can anticipate the potential flood of new entrants, they will all be equally discouraged from becoming suppliers of the good. But that leaves the return to the current suppliers above normal, spurring everyone to jump into the market, which would reduce the return to below normal, prompting everyone to stay on the sidelines . . . well, you can see where this is going. Within the model of perfect competition there is no escape from this circular conundrum, except, perhaps, through some artificial and unrealistic mechanism that assigned the right of entry randomly among all possible producers.
Nevertheless, the model accurately depicts a particular, limiting case implicit in the general analysis of supply and demand. As long as one keeps in mind that it is an unrealistic abstraction, isolating just one aspect of the actual market process, it may have its uses. However, to pass judgment on existing markets based on how closely they approach a state of perfect competition is to egregiously confuse the map and the territory.
I recently ran across a similar example in a textbook on international economics. Discussing the relationship of a nation's exchange rates and its balance of payments, the author says, "under a flexible exchange rate system, a balance-of-payments disequilibrium is immediately corrected by an automatic change in the exchange rates . . . " (Salvatore, 2004, p. 512, emphasis mine).
Now, if a disequilibrium is "immediately corrected," in what sense did it ever exist? Wouldn't a market have to be out of equilibrium for at least some period of time, however brief, before we could say that there was disequilibrium? Salvatore's problem is that in the model he is using, exchange rates are always in equilibrium. He recognizes that real-world rates must be out of equilibrium at times—otherwise why would any rational investor ever enter the foreign exchange market?—but his model only deals with states of equilibrium. Therefore, he is forced to posit a disequilibrium that vanishes simultaneously with its appearance.
And how would exchange rates change "automatically"? Is there a god of foreign exchange, attuned to the equations in textbooks on international economics, acting to enforce those formulas? In reality, isn't it when traders in the foreign exchange market believe that an existing exchange rate is, in some sense, "wrong," that they make trades resulting in an alteration of the rate? In markets as liquid as the dollar-yen or dollar-euro, we might expect that such adjustments will occur very rapidly, so that reality will not be too different from a model in which they are instantaneous. Still, that does not make them "automatic."
But Austrian economists, as well as anyone else who is interested in the relationship between scientific models and the real world, should be aware that it is not only economists who are sometimes seduced by their models. Mistaking an abstraction for the experience from which it was abstracted is a common error, of which I will offer a few examples.
Eli Maor writes popular books on mathematics and teaches the history of mathematics at Loyola University in Chicago. His book, e: The Story of a Number, is a lively account of the discovery and exploration of the important mathematical number e, and is well worth reading if you have an interest in such things. (e, which is approximately equal to 2.718281828, is the base of natural logarithms, and has many other notable properties, such as the fact that y = e^x is the only function that is its own derivative.) But the book contains several examples of mistaking a model for reality.
For example, Maor (p. 103) notes that the rate at which a radioactive substance decays is modeled by the equation m = m0e^(−at)—the mass remaining at time t is equal to the initial mass, m0, times the number e raised to the power −at. In the equation, although e^(−at) becomes smaller and smaller as time passes, it never reaches zero. Therefore, Maor concludes, "the substance will never completely disintegrate."
It is true that if we start with a couple of million plutonium atoms, the above model of radioactive decay implies that, even after a billion years, there will still be some plutonium left, roughly on the order of a millionth of an atom. But the idea of a millionth of an atom of plutonium is nonsensical—plutonium means a whole atom with a certain number of protons, so we either have at least one atom of plutonium or we don't have any plutonium at all. At some point, whatever result the equation describing radioactive decay comes up with, the very last atom of some initial pile of a radioactive substance will decay, leaving exactly none of m0 behind. In other words, it eventually will "completely disintegrate."
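The contrast between the continuous model and the discrete reality is easy to check numerically. Below is a minimal Python sketch of my own (not Maor's; the decay probability and time scale are invented purely for illustration) comparing the smooth decay formula, which never reaches zero, with a simulation of whole atoms, which does:

```python
import math
import random

def continuous_remaining(m0, a, t):
    """Continuous decay model m = m0 * e^(-a*t): positive for every finite t."""
    return m0 * math.exp(-a * t)

def discrete_remaining(n_atoms, decay_prob, steps, rng):
    """Each whole atom independently decays with probability decay_prob
    per time step; the count can, and eventually does, hit exactly zero."""
    for _ in range(steps):
        n_atoms = sum(1 for _ in range(n_atoms) if rng.random() > decay_prob)
        if n_atoms == 0:
            break
    return n_atoms

rng = random.Random(0)
# The continuous model leaves a positive (if absurdly tiny) mass forever...
print(continuous_remaining(1_000_000, a=0.5, t=100) > 0)   # True
# ...while a simulation of a million whole atoms runs out completely.
print(discrete_remaining(1_000_000, decay_prob=0.5, steps=100, rng=rng))  # 0
```

The equation is an excellent approximation while trillions of atoms remain; it only misleads when taken literally down to the last atom.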
In passing, I note that models like the above should caution Austrians about criticizing neoclassical economics for employing calculus to model human choice. Calculus deals with functions whose values vary continuously over some range. That means that they cannot have any lower limit below which minutely different values are considered to be the same. For example, a continuous function cannot output prices only to the nearest penny, but instead to any fraction of a cent whatsoever.
Of course, in economics we never find humans choosing between arbitrarily small differences in the quantity of some good—grocery stores do not sell 1.0034892358623565128787 pounds of apples at a different price than 1.0034892358623565128786 pounds of apples, nor do consumers take into account such minute differences in making purchases. That fact alone might seem to forbid using calculus in economics.
However, physics, chemistry, and biology frequently use calculus to model phenomena that clearly do not involve quantities that can alter continuously. For instance, in studies of animal or plant populations, calculus may be used to describe the rate at which the population changes. Of course, a population of living creatures can only change by integral amounts—a population can never drop by 1/16th of a wolf or increase by a third of a maple tree. Nevertheless, as long as the modeler remains aware that his model differs in that respect from the real phenomenon he is examining, his employment of calculus is not only acceptable but may be crucial to revealing important aspects of his subject. While many Austrian critiques of the neoclassical approach are on target, I do not think that complaints about using calculus to model discrete choices hit their mark.
But let us return to our main topic, and look at a scientific study that has garnered a great deal of public attention because it supposedly shows that a widely observed phenomenon is merely an illusion. Famed psychologist Amos Tversky and two of his colleagues performed a study (Gilovich, Vallone, and Tversky, 1985) in which they claimed to demonstrate that the popular notion of a basketball player having the "hot hand," meaning that he is shooting particularly well for a time, is a delusion springing from ignorance of statistics. The authors examined extensive sequences of shots by players on the Philadelphia 76ers, looking for evidence of hot streaks. The Boston Globe described their argument as follows:
"The spectacle that basketball fans profess to see, Tversky argued, is nothing more than the standard laws of chance, observed through the imperfect lens of human cognition. Specifically, he noted, people have a tendency to expect the overall odds of a chance process (say, the 50 percent distribution of heads on a flipped coin, or the 46 percent accuracy of [76er-guard] Toney's field-goal shooting) to apply to each and every segment of the process. For instance, when flipping a coin 20 times, it's not uncommon to see a string of four heads in a row. Yet when people are paying attention to a shorter sequence of the 20 coin flips, they are inclined to regard a string of four heads as nonrandom—as a hot streak—even though a strict back-and-forth of heads and tails throughout the 20 flips would be far less likely." (Ryerson, 2002)
Now, a toss using an honest coin is already known to be a random event, with even odds of producing heads or tails. Therefore, we logically conclude that runs of heads or tails are due to chance. But the mere fact that some data occur in a pattern consistent with pure chance does not mean that we necessarily are viewing a random process. Consider a statistician, sitting down to construct a table for his new textbook, which will show a representative distribution of heads and tails over a series of one hundred coin tosses. If we follow Tversky's logic, when we study the table we will conclude that the entries appeared there as the result of pure chance! Of course, they appear just like the outcome of a random process, because they were deliberately designed to appear so and not because their presence in the table is a chance event.
One reader contended that, in the above paragraph, I have misconstrued what it means to call some happening random. When a scientist refers to some process as 'random,' he contended, it only indicates that a statistical analysis of the events in question reveals a particular type of pattern. Nothing is implied about the nature of the cause that brought the events about.
Now, it is certainly possible and logically consistent to define 'random' that way. But I believe that such a definition is at odds with the common usage of 'random,' and unnecessarily so, since it lacks scientific justification. Consider Columbia University sociologist Duncan J. Watts describing the approach he and two colleagues adopted for modeling networks of scientific collaboration:
"[W]e assumed that the matching between actors and groups occurred in a more or less random fashion. Clearly this isn't the case in the real world, where decisions about which groups to join are generally planned and often quite strategic. But as we had done so often in our models before, we hoped that the decisions of individual actors were sufficiently complicated and unpredictable that it would not be possible to distinguish them from simple randomness." (Watts, 2003, p. 127)
In the above passage, we find a prominent scientist, one who is intimately familiar with contemporary statistical practice, asserting that a phenomenon can "clearly" not be random, yet still be statistically indistinguishable from other phenomena that are. I believe Watts is making a sensible and important distinction between events due to pure chance and events brought about by intelligence or skill, which nevertheless occur in the same sort of statistical pattern that arises from truly random processes.
Returning to the Tversky study, it seems plausible that if hot streaks in shooting a basketball are merely an illusion, then cold streaks are probably an illusion as well. So let's imagine a basketball player who has the flu the day of a game. Furthermore, imagine that an hour before tip-off he learns that his mother has just died. If he shoots 2 for 18 that night, then by Tversky's logic we would be mistaken to attribute his poor performance to the above-mentioned factors, since a certain number of very poor shooting nights will occur by chance, just as over 18 coin flips we will occasionally see only two heads.
As I see it, the crucial fact that Tversky and his colleagues overlooked in their analysis is that whether or not a basketball player makes any particular shot is not a random event. If he executes perfectly, then the ball will go in the basket. If you have played much basketball, then you know that you can often detect a flaw in your technique even as the ball leaves your hand, realizing that your shot won't go in well before it nears the hoop. That alone does not demonstrate that making shots is not a chance phenomenon—after all, you might be able to detect your errors but still have no control over them. However, the fact is that you can correct the problem on your next shot by focusing on the proper execution of the movement you botched previously. And increased focus is the primary experience reported by players during the time when they had the "hot hand."
If making shots really were a random process, then we might even wonder whether Michael Jordan's success at basketball was due to a long run of good luck, while my failure to make my middle school team was just bad luck. Perhaps if Michael and I both continued playing for a long enough period of time, we would be equally successful.
Debunkers of the "hot hand" might protest that I have misrepresented their case. They have never claimed, they might say, that there is no skill involved in shooting, nor that different people do not possess that skill to different degrees. They only assert that whether any one player, during some stretch, makes a greater or lesser percentage of shots than his average is a matter of chance. But if different players have varying levels of skill at shooting, then why wouldn't any particular player's skill also differ over time, since he is never exactly the same person today as he was yesterday? Why would we believe that today's Joe Smith could only shoot better than yesterday's Joe Smith by chance? Won't Joe Smith be a better or worse shooter on one day than on another?
In fact, shooters with the "hot hand" often report that they experienced the game in a different way during their streak. They were "in the zone," they saw the basket more clearly, they found their concentration completely unaffected by the crowd or by opposing players. If we can believe that a player's poor shooting one day can be traced to events that left him quite distracted, why should we not also acknowledge that a relative absence of distractions can lead to a good shooting day?
The results of Tversky's study are perfectly consistent with the possibility that hot hands are a real phenomenon, in other words, that players really do enter into a "zone" where they are performing at a higher level, but that neither the player, nor his coach, nor a statistician can predict how long he will stay on that plateau. The "hot hand" is real, but comes and goes at random, so that the fact that a player had the "hot hand" for five shots would have no bearing on whether he will make the sixth.
In fact, I believe that is what actually is going on. If so, then finding a player's hot and cold streaks randomly distributed around his average shooting percentage is hardly surprising, since that is precisely why his average is what it is.
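That possibility can be illustrated with a toy simulation, entirely my own construction with invented percentages: a player who genuinely shoots better on some attempts, but whose "hot" state arrives unpredictably, produces data that looks exactly like independent chance around his blended average—which is all the study measured:

```python
import random

def season(shots=100_000, p_hot=0.3, hot_pct=0.60, cold_pct=0.45, seed=1):
    """On each shot the player is genuinely 'hot' (better focus, cleaner
    mechanics) with probability p_hot, but the hot state comes and goes
    unpredictably. Returns (overall shooting average, average on shots
    taken immediately after three straight makes)."""
    rng = random.Random(seed)
    makes = []
    for _ in range(shots):
        pct = hot_pct if rng.random() < p_hot else cold_pct
        makes.append(rng.random() < pct)
    after_streak = [m for i, m in enumerate(makes)
                    if i >= 3 and all(makes[i - 3:i])]
    return sum(makes) / len(makes), sum(after_streak) / len(after_streak)

overall, post_streak = season()
# Both figures hover near the blended average (0.3*0.60 + 0.7*0.45 = 0.495):
# the very real hot state leaves no statistical trace in the streaks.
```

Because the hot state's arrival is itself unpredictable, making three in a row tells you nothing about the fourth shot—precisely the pattern Gilovich, Vallone, and Tversky found, and yet the player's elevated state is perfectly real.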
So the misuse of abstract models is hardly unique to economics. The success of a model, along with the certainty of result it offers, can tempt its users to conflate it with reality. But the world is never ensnared in even the best of our nets. As Shakespeare said, "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
Gilovich, T., R. Vallone, and A. Tversky. 1985. "The Hot Hand in Basketball: On the Misperception of Random Sequences." Cognitive Psychology, 17, pp. 295–314.
Maor, Eli. 1994. e: The Story of a Number. Princeton, New Jersey: Princeton University Press.
O'Driscoll, Gerald P., and Mario Rizzo. 1996. The Economics of Time and Ignorance. London and New York: Routledge.
Ryerson, James. 2002. "The Man Who Wasn't There." The Boston Globe, October 20, 2002, p. D1.
Salvatore, Dominick. 2004. International Economics. Hoboken, New Jersey: John Wiley & Sons.
Watts, Duncan J. 2003. Six Degrees: The Science of a Connected Age. New York: W.W. Norton & Company.