Mises Daily

The Trouble With Axelrod: The Prisoners’ Utility Cannot be Measured or Compared

The sub-discipline of international relations in the twentieth century has been dominated by the realist idea that cooperation between state actors is extremely unlikely due to the anarchic nature of the international system. According to the traditional realist logic, the international system lacks an independent enforcing agency capable of coercing states should they attempt to defect on their cooperative agreements with other international actors, and as a consequence, states are hesitant if not completely unwilling to cooperate with other states.

The realist expectation that cooperation is extremely unlikely in situations lacking an independent enforcement agency has been forcefully challenged by Robert Axelrod in his The Evolution of Cooperation.[1] Utilizing the formal theoretical modeling of the rational choice school, Axelrod offers a strong empirical case for the possibility of cooperation between actors — even in situations lacking an independent enforcement agency.

While Axelrod rightly concludes that cooperation is possible and even probable in situations lacking an independent enforcement agency, his method is not suitable for defending this conclusion.

Axelrod’s defense of the possibility of cooperation between two actors in the absence of an independent enforcement agency is based upon several computer tournaments he sponsored at the University of Michigan to investigate the best possible solution to the iterated Prisoner’s Dilemma. Contestants were asked to submit strategies that would result in the best overall performance when paired against one another in a round-robin tournament of the Prisoner’s Dilemma.

The strategy that emerged victorious from these round-robin tournaments was the simple “tit-for-tat” strategy, in which the player opens with cooperation in the first round and then repeats his opponent’s previous move in each subsequent round. Interestingly, the strategies based primarily on taking advantage of one’s opponent (what Axelrod calls “mean” strategies) fared the worst in the tournaments. In contrast, the strategies that fared best were the ones that were most forgiving of the opponent’s defections and most open to cooperation (what Axelrod calls “nice” strategies). Both of these characteristics are embodied best in the “tit-for-tat” strategy.
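The mechanics of such a tournament can be sketched in a few lines of Python. This is an illustrative miniature with four stock strategies, not a reconstruction of Axelrod's actual tournaments (which fielded dozens of submitted programs); the payoff values, however, are the ones he used: 5 for exploiting a cooperator, 3 for mutual cooperation, 1 for mutual defection, and 0 for being exploited.

```python
# A miniature round-robin of the iterated Prisoner's Dilemma.
# Payoffs follow the values Axelrod used: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then repeat the opponent's previous move.
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return "D"

def always_cooperate(my_hist, opp_hist):
    return "C"

def grudger(my_hist, opp_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp_hist else "C"

def play(s1, s2, rounds=200):
    h1, h2, score1 = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score1 += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return score1  # total payoff to the first player

def tournament(strategies, rounds=200):
    # Every strategy meets every strategy (including itself), as in
    # Axelrod's round-robin; scores are summed across all pairings.
    return {a: sum(play(sa, sb, rounds) for sb in strategies.values())
            for a, sa in strategies.items()}

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect,
              "always_cooperate": always_cooperate, "grudger": grudger}
```

In a field like this one, the “nice” strategies (tit-for-tat and the grudger) finish ahead of always_defect even though neither ever outscores it in a single head-to-head match; the aggregate ranking, not any individual victory, is what tit-for-tat wins.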

The results of the tournaments are interpreted by Axelrod to have broad theoretical implications and applications. In the first place, and contrary to the expectations of most of the program designers, the fact that strategies that sought to cooperate with the opponent fared much better than those that sought to take advantage of him is interpreted by Axelrod to indicate that the traditional understanding of the Prisoner’s Dilemma is mistaken. For the Prisoner’s Dilemma has traditionally been understood as a game that will almost always lead to defection by both players.

The empirical results of Axelrod’s tournaments, however, indicate that cooperation is not only possible between two players in a Prisoner’s Dilemma; it is also the best possible strategy for both players to adopt if they wish to maximize their absolute gains over multiple iterations of the game.

Additionally, Axelrod argues that cooperative strategies like tit-for-tat can “invade” areas that are dominated by “mean” strategies, because even a few players employing a cooperative strategy can benefit from one another enough to allow the strategy to spread throughout the “mean” area over time. This is precisely the evolutionary aspect of Axelrod’s argument: cooperative strategies, because they are more profitable to both players than strategies based upon defection, come to be adopted by more people over time as a result of the gains they offer to the players who adopt them.
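That evolutionary claim can be illustrated with a stylized replicator dynamic. The model below is my own simplification for illustration, not Axelrod's ecological simulation: population shares grow in proportion to average payoff, with random pairwise matching and the per-match scores that follow from 200-round games at Axelrod's payoff values.

```python
# A stylized replicator dynamic illustrating the "invasion" claim.
# (An illustrative simplification, not Axelrod's own model.)
# Per-match totals for 200-round games with payoffs T=5, R=3, P=1, S=0:
#   tit-for-tat vs tit-for-tat: mutual cooperation, 200 * 3 = 600 each
#   tit-for-tat vs always-defect: 0 + 199 * 1 = 199 (the defector gets 204)
#   always-defect vs always-defect: 200 * 1 = 200 each
V = {("TFT", "TFT"): 600, ("TFT", "AD"): 199,
     ("AD", "TFT"): 204, ("AD", "AD"): 200}

def invade(x, generations=200):
    """x = initial share of tit-for-tat players in a population of
    unconditional defectors; each generation, shares grow in
    proportion to average payoff under random pairwise matching."""
    for _ in range(generations):
        f_tft = x * V[("TFT", "TFT")] + (1 - x) * V[("TFT", "AD")]
        f_ad = x * V[("AD", "TFT")] + (1 - x) * V[("AD", "AD")]
        mean = x * f_tft + (1 - x) * f_ad
        x = x * f_tft / mean  # discrete replicator update
    return x
```

In this toy model a 1 percent cluster of tit-for-tat players eventually takes over the population, while a share below the break-even point (1/397 here) dies out, which is consistent with Axelrod's point that invasion requires a cluster rather than a lone cooperator.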

The problem with Axelrod’s argument is the oft-discussed problem of interpersonal utility comparison. Axelrod’s argument (like all game-theoretic modeling, welfare economics, and utilitarian moral philosophy) requires that it be possible to measure and compare the utilities of different people on the same scale of measurement.

The problem with this assumption is that it is quite impossible to construct a scale of measurement for human preferences, both for individuals and especially for groups of individuals.[2] For this to be possible, there would have to exist a constant unit of utility for each individual. The impossibility of such a unit can be demonstrated simply by asking the following question: what is the constant unit of “utility” that measures my preference for chewing tobacco over movie popcorn?

There is no doubt that Axelrod is aware of this problem, and he addresses it specifically: “The payoffs of a player do not have to be measured on an absolute scale. They need only be measured relative to each other.”[3] But, how would it be possible to measure the utilities of two different people relative to each other without a constant unit of measurement for each individual, i.e., an absolute scale for each individual? Indeed, without a constant unit of measurement for each individual, the two utility scales are completely incommensurable.

You cannot, for example, compare my idea of beauty with President Lincoln’s idea of beauty, when there exist no known (or even potentially knowable) intervals (units) of beauty for me as an individual. Axelrod’s disclaimer notwithstanding, he reveals in the footnote to the above statement that he is indeed assuming an absolute scale of utility measurement. He states, in fact, that:

[T]he utilities need only be measured as an interval scale. Using an interval scale means that the representation of the payoffs may be altered with any positive linear transformation and still be the same, just as temperature is equivalent whether measured in Fahrenheit or Centigrade.[4]

What I am claiming is that it is quite impossible to achieve a linear transformation of two “variables” to which it is impossible to assign numerical values. You can do this with temperatures read off a column of mercury solely because we assume that we can assign a constant unit of measurement, but we cannot do this with “utilities.” You can, of course, put arbitrary numbers on anything, but the very idea of using an interval scale assumes that there exist known (or at least potentially knowable) intervals between each unit of measurement, an assumption that is simply untenable in the realm of what is infelicitously known as man’s “utilities.”
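For concreteness, here is what the invariance Axelrod invokes amounts to when numbers are available, using his tournament payoffs (5, 3, 1, 0) and the Fahrenheit/Centigrade transformation from his own analogy. The point at issue is not whether this arithmetic works (it plainly does for temperature) but whether the initial numerical assignment is ever legitimate for utilities.

```python
# Invariance under a positive linear (affine) transformation, the defining
# property of an interval scale, demonstrated on Axelrod's payoff values.
# f(x) = a*x + b with a > 0 is exactly the Fahrenheit/Centigrade relation.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def affine(x, a=9/5, b=32):
    return a * x + b

original = [T, R, P, S]
rescaled = [affine(x) for x in original]  # roughly [41.0, 37.4, 33.8, 32.0]

# The ordering T > R > P > S survives the transformation...
assert sorted(rescaled, reverse=True) == rescaled

# ...and so do ratios of payoff differences, e.g. (T - R) / (R - P):
ratio_before = (T - R) / (R - P)
ratio_after = (rescaled[0] - rescaled[1]) / (rescaled[1] - rescaled[2])
assert abs(ratio_before - ratio_after) < 1e-9
```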

If we cannot construct an interval or continuous measure of utility for individuals or groups, then (and this is vital) it is equally impossible to know when a Prisoner’s Dilemma situation exists in the world. We might be able to hypothesize, in a purely formal manner, that it might be possible for two men’s preferences to approximate the Prisoner’s Dilemma at a specific point in time. But, in the light of the impossibility of ever measuring or comparing utilities, we would never be able to say with any certainty that any given situation was a Prisoner’s Dilemma — and, in fact, we cannot say with any certainty that a Prisoner’s Dilemma has ever existed.

Since this is the case, it is surely presumptuous of Axelrod to prescribe for the reader all sorts of ways to mitigate the cooperative problems associated with the Prisoner’s Dilemma — when neither he, nor anyone else, knows whether there are such dilemmas.

The foregoing discussion might give the impression that it is impossible to determine anything at all about man’s preferences and utilities. In actuality, however, we can determine man’s preferences with absolute certainty. Following Rothbard, we can analyze man’s preferences through his actions (or, to use Rothbard’s phrase, his “demonstrated preferences”).[5]

If a man voluntarily acts in a certain way, we can say with absolute certainty that he preferred that course of action to any other option. Another way of saying this is to say that all voluntary human action is an attempt to make the actor subjectively, and ex ante, better off than he otherwise would have been.

We do not have to hypothesize about man’s preferences, or attempt to place numerical values on them — all we have to do is observe how he voluntarily does act. It is high time that we exorcise the unjustifiable comparison of “utilities” from political science and economics.

Bibliography

Axelrod, Robert, The Evolution of Cooperation. New York: Basic Books, 1984.

Croce, Benedetto, “On the Economic Principle, Parts I and II.” International Economic Papers 3 (1953): 175-76.

Rothbard, Murray Newton, “Toward a Reconstruction of Utility and Welfare Economics.” In The Logic of Action One: Method, Money, and the Austrian School, 211-55. London: Edward Elgar, 1997.

Notes

[1] New York: Basic Books, 1984.

[2] For the most compelling statement of this argument, see Murray Newton Rothbard, “Toward a Reconstruction of Utility and Welfare Economics,” in The Logic of Action One: Method, Money, and the Austrian School (London: Edward Elgar, 1997). See also Benedetto Croce, “On the Economic Principle, Parts I and II,” International Economic Papers 3 (1953).

[3] Axelrod, p. 17.

[4] Ibid., p. 216n. Emphasis mine.

[5] Rothbard, op. cit.
