Behavioral economists say that people behave irrationally because they miscalculate probabilities. But perhaps the problem lies not with people, but with applying probability theory in a non-ergodic environment full of uncertainty?1

Daniel Kahneman, an economic Nobel Memorial Prize laureate, in his book *Thinking, Fast and Slow* puts forward the thesis that people often overestimate low probabilities. As an example, he gives the risk of suicide bombings on buses in Israel in 2001–2004. Although the risk of falling victim to a bomber was small for any single passenger, people avoided buses as much as possible, which according to Kahneman was irrational and resulted not from a sensible concern for survival but from the availability heuristic.

The September 11 terrorist attacks in the United States are also often cited in this context. After the attacks, Americans preferred for a time to travel by land rather than by air, even though traveling by car is, statistically, more dangerous. The substitution was therefore supposedly irrational and contributed to unnecessary deaths through an increased number of road accidents.

However, there is a problem with the examples above: the concept of risk does not necessarily apply to them. As early as 1921, the American economist Frank Knight distinguished risk from uncertainty. The first concept applies to events whose probability we can estimate, such as a particular result of a dice roll. The second refers to events whose probability is not known. Uncertainty applies to the vast majority of events in business and in everyday life generally, and to terrorist attacks. How were people to assess the danger of the next terrorist attack and determine the risk of traveling by bus in Israel, or by airplane in the United States, in a world of new threats? They couldn’t! The 9/11 attacks were clearly an unusual, singular (and tragic) event.

**Life Isn’t a Casino**

Kahneman and other behavioral economists confuse risk with uncertainty, or class probability with case probability. But class probability does not apply here: flying planes into the twin towers of the World Trade Center was a unique event. Past statistics say nothing about future threats that are fundamentally uncertain. After the 9/11 attacks, Americans could reasonably assume that their world had changed, and instead of feeding new information into the old algorithm, they simply discarded the algorithm that said planes are safer than cars. A world in which planes are hijacked and flown into skyscrapers is qualitatively different from a world in which they are not.

The real problem is not that we miscalculate the probability of some states of the world, but that we do not know how the world works. Probability applies in a casino, but not in real life, where there are many unknown unknowns. There are significant differences between a roulette game or a weather forecast on the one hand and the scope of new inventions, the prospect of war, or the outlook for asset prices on the other. As Keynes wrote in 1937 (amazingly, I agree with Keynes), “About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.”

The thesis that people overestimate the very low risk of terrorist attacks is therefore nonsense, because the concept of risk does not apply here at all; it cannot be estimated. It is thus difficult to claim that people behaved irrationally in opting for a form of transport that was more under their control and minimized a new, undefined threat, even if it could ultimately prove more dangerous.

**Rationality In Non-Ergodic World**

According to the expected utility hypothesis, rational decisions are made on the basis of expected value. Daniel Kahneman and Amos Tversky rejected this approach, formulating instead prospect theory, according to which people fear losses more than they value gains and prefer a certain profit even if it is lower than the expected payout of an alternative gamble. For example, people prefer to receive $46 for sure rather than take part in a gamble in which they get $100 if the coin lands heads and nothing if it lands tails. And yet the expected value of such a bet is $50, which is more! Studies show that people also refuse a bet with a 50 percent probability of losing $100 and a 50 percent probability of winning $200, even though the expected payout is $50! Is this not another example of the irrationality of human choices?
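The expected-value arithmetic behind these two gambles can be checked in a few lines. The sketch below uses only the numbers given in the text:

```python
# Expected value of the two gambles described in the text.

# Gamble 1: $100 on heads, $0 on tails, offered against a sure $46.
ev_coin_flip = 0.5 * 100 + 0.5 * 0
print(ev_coin_flip)  # 50.0, more than the sure $46 most people choose

# Gamble 2: 50 percent chance of losing $100, 50 percent chance of winning $200.
ev_asymmetric_bet = 0.5 * (-100) + 0.5 * 200
print(ev_asymmetric_bet)  # 50.0, positive, yet the bet is widely refused
```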

Not necessarily. The problem is that both neoclassical and behavioral economists incorrectly assume that the world is ergodic, i.e., one in which the ensemble average equals the time average. In reality, however, the world is non-ergodic: the time average, i.e., the average of a given outcome over time, differs from the average across the population at a given moment.

What does this mean? Let’s consider the second example again. If it were possible to toss the coin a million times, such a bet would of course be extremely beneficial for us. Likewise, if a million people each tossed a coin once, the average payout in such an unusual lottery would be around $50. From the point of view of a single person, however, the risk of a non-trivial loss in this bet is considerable: after one wrong toss a person loses $100, and after a bad series could even fall into financial trouble. The payoff trajectory of one person is something completely different from the average payoff across all parallel universes.
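The ruin risk for a single trajectory can be made concrete with a small simulation. The starting bankroll of $300 and the ruin threshold below are my own illustrative assumptions, not figures from the text; the point is only that a bet with positive expected value can still bankrupt an individual player.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def final_bankroll(start=300, max_rounds=1000):
    """Repeatedly take the 50/50 lose-$100 / win-$200 bet until the player
    can no longer cover a $100 loss, or until max_rounds is reached.
    (Both parameters are illustrative assumptions.)"""
    bankroll = start
    for _ in range(max_rounds):
        if bankroll < 100:  # ruined: cannot stake the next bet
            break
        bankroll += 200 if random.random() < 0.5 else -100
    return bankroll

# Fraction of simulated players who go broke, even though every single
# bet they took had a positive expected value.
trials = 10_000
ruined = sum(final_bankroll() < 100 for _ in range(trials))
print(ruined / trials)  # a substantial minority end up ruined
```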

Let’s consider another example. Just because a certain percentage of black people in the US commit crimes, it does not follow that every black person commits crimes at the rate implied by the aggregate statistics. Aggregate statistics should not be used to conclude what a particular person is likely to do.

Similarly, the fact that a given percentage of people is at a certain moment in the lowest income decile does not mean that those people spend the same percentage of their lives or careers in that decile. In fact, dynamic inequality is lower than static inequality, which is worth remembering in all discussions about income inequality.

Ole Peters proposes another example to illustrate that the sequence of events over time matters. Assume that the price of a certain share is $100 and that it randomly rises or falls by 50 percent on each coin flip. The ensemble average stays at $100: in the positive scenario the price rises to $150, in the negative scenario it falls to $50, which gives an expected value of $100. The time average, however, would be $75 after two flips, as the stock price first goes up 50 percent to $150 and then falls 50 percent to $75 (the order of the moves does not matter). In fact, shareholders’ capital would not remain unchanged but would quickly diminish after a few flips.
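Peters’s point reduces to a little arithmetic. The sketch below (the ±50 percent multipliers come straight from the example) contrasts the per-flip ensemble multiplier with the growth rate experienced along a single trajectory:

```python
import math

up, down = 1.5, 0.5  # +50 percent and -50 percent moves from the example

# Ensemble perspective: the expected multiplier per flip is exactly 1,
# so the average over many parallel investors stays at $100.
ensemble_multiplier = 0.5 * up + 0.5 * down
print(ensemble_multiplier)  # 1.0

# Time perspective: one up and one down multiply capital by 1.5 * 0.5 = 0.75,
# i.e. $100 -> $150 -> $75, regardless of the order of the two moves.
print(100 * up * down)  # 75.0

# The per-flip growth rate of a single trajectory is the geometric mean
# of the multipliers, which is below 1: the capital shrinks over time.
time_multiplier = math.sqrt(up * down)
print(round(time_multiplier, 3))  # 0.866
```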

Nassim Taleb gave an even better, though more extreme, example. In a game of Russian roulette with a stake of $1.2 million, the expected payout is $1.0 million (5/6 times $1.2 million). But anyone who keeps playing long enough will eventually end up in the cemetery (similarly, the casino always wins in the end, even if at any particular moment some players manage to win something). Refusing to participate in such a game does not prove irrationality or even loss aversion; it is a manifestation of common sense and an elementary desire to survive.
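Taleb’s example rests on the same arithmetic: the one-round expected payout is high, but the survival probability decays geometrically with repeated play. A minimal sketch (the round counts are arbitrary illustrations):

```python
# One round of Russian roulette with a $1.2 million stake:
# five chambers out of six pay, one kills.
stake_millions = 1.2
ev_one_round = (5 / 6) * stake_millions
print(round(ev_one_round, 1))  # 1.0 (million dollars)

# Probability of still being alive after n rounds: (5/6) ** n.
for n in (1, 10, 50):
    print(n, round((5 / 6) ** n, 4))
# After 50 rounds the survival probability is about 0.0001, i.e. 0.01 percent:
# the time perspective, not the one-round expectation, is what matters.
```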

**Being Rational Isn’t The Same As Being Reasonable**

Neoclassical economists consider people rational only if they adhere to certain axioms or to the researchers’ models. Because human action rarely coincides with the model, behavioral economists have brilliantly “discovered” irrational behavior among people. Unfortunately, their definition of rationality coincides with the traditional optimizing model. As Mervyn King points out in *The End of Alchemy*, “the problem with behavioural economics is that it does not confront the deep question of what it means to be rational when the assumptions of the traditional optimising model fail to hold.”

And this is the key to the puzzle of rationality. In an ergodic world where risk can be objectively measured, knowledge of probability theory is enough. In a non-ergodic and uncertain world, however, probability theory and the expected utility hypothesis are not enough. In such a reality, rationality is not identical to the rules of formal logic and the calculus of probability. As the German psychologist Gerd Gigerenzer argues, in such a complex world heuristics do not necessarily lead to cognitive errors; they are rational adaptive tools. (Another issue is that the cognitive biases found by Kahneman often disappear when survey questions are reformulated in line with the frequentist rather than the Bayesian interpretation of probability.)

People do not act as *homo economicus*, calculating convoluted probabilities to optimize their decisions. But the problem here is not human behavior; it is the wrong definition of rational behavior as behavior consistent with deduction and probability theory. People do not function with “zero intelligence.” (Even if people did make decisions with “zero intelligence,” markets could still work effectively, as demonstrated by the famous experiment of Gode and Sunder; I wrote more about the distinction between individual and systemic rationality in my previous article.)

Simply put, in an uncertain world it is not entirely clear what it means to act rationally. In a world full of uncertainty one cannot calculate expected utility, and there is no such thing as optimizing behavior, because the odds and payoffs are not known *a priori*. In an uncertain, complex, and changing world, people reach for induction, for fast and frugal heuristics. Is this *irrational*? Maybe. But it is certainly *reasonable* given the actual conditions. Or maybe even rational, if we consider rational whatever allows for survival.

[*This article was translated from an original Polish version, found here.*]

- 1[Editor’s Note: For a definition of “ergodic,” Packard et al. quote Paul Davidson: “By definition, an ergodic stochastic process simply means that averages calculated from past observations cannot be persistently different from the time average of future outcomes.” Packard et al. add: “The world is not ergodic; human action and innovation constantly shift the basic structures within which decisions and actions are made.” See “Uncertainty Types and Transitions in the Entrepreneurial Process” by Mark D. Packard, Brent B. Clark, and Peter G. Klein. (https://mises-media.s3.amazonaws.com/The%20Types%20of%20Uncertainty%20Entrepreneurs%20Face.pdf)]