At the US-Saudi Investment Forum on November 19, 2025, entrepreneurial titan Elon Musk shared with his audience his vision of the future shaped by advances in artificial intelligence (AI) and robotics. Among other things, Musk said: “And my guess is, if you go out long enough—assuming there’s a continued improvement in AI and robotics, which seems likely—money will stop being relevant.”
A future in which money no longer plays a role? Is that really possible, or at least probable? To answer these questions, let us first recall why people have demanded money for thousands of years.
Standard economics textbooks list three motives for holding money: the medium-of-exchange function, the unit-of-account function, and the store-of-value function. Yet there is one deeper reason that precedes and ultimately determines all these motives: uncertainty (or unknowability) in human action.
If everything were perfectly predictable, people would indeed have no need for money. Everyone would already know today all their future goals, needs, available resources, prices, etc. In such a world, we could arrange everything today so that our future supply of goods would be perfectly aligned with our future demand for goods.
But because the future is uncertain, humans are unable to know today what they will need or want tomorrow. Instead, they must already prepare in the present for future changes they cannot yet foresee or fully assess. And this—uncertainty about the future—is precisely why people demand money. Ludwig von Mises (1881–1973) put it this way in Human Action (p. 377): “Only because there is change, and because the nature and extent of change are uncertain, must the individual hold cash.”
Holding money gives people the ability to cope with future uncertainty. It makes them (more) capable of exchange and allows them to react to changed circumstances in the way that is best for them. Of course, one could also prepare for uncertain future events by holding “ordinary goods” (food, clothing, etc.). But holding money is particularly simple and efficient—because money is the generally accepted medium of exchange, the most marketable good.
Musk’s claim that money could become dispensable (and thus lose its purchasing power) therefore presupposes that uncertainty about the future in human action can (or will) disappear. At first glance, one might think this could be a consequence of AI, robotics, or other technological leaps.
Upon closer examination, however, this conclusion cannot be upheld. There are two main reasons for this: First, nature, in which humans live, cannot (based on all experience) be perfectly predicted. Circumstances change, often in completely unforeseeable ways—natural disasters (volcanic eruptions, floods, etc.) unexpectedly occur, or regions previously uninhabitable suddenly become livable due to changing weather patterns. Nature constantly brings uncertainty that humans must deal with.
The second—and for this question decisive—reason is this: Human action itself cannot be predicted using scientific methods. Ludwig von Mises already pointed out that human action can be neither explained nor predicted on the basis of, say, external or internal biological or chemical factors:
The sciences of human action start from the fact that man purposefully aims at ends he has chosen. It is precisely this that all brands of positivism, behaviorism, and panphysicalism want either to deny altogether or to pass over in silence. Now, it would simply be silly to deny the fact that man manifestly behaves as if he were really aiming at definite ends. Thus the denial of purposefulness in man’s attitudes can be sustained only if one assumes that the choosing both of ends and of means is merely apparent and that human behavior is ultimately determined by physiological events which can be fully described in the terminology of physics and chemistry.
Even the most fanatical champions of the “Unified Science” sect shrink from unambiguously espousing this blunt formulation of their fundamental thesis. There are good reasons for this reticence. So long as no definite relation is discovered between ideas and physical or chemical events of which they would occur as the regular sequel, the positivist thesis remains an epistemological postulate derived not from scientifically established experience but from a metaphysical world view. The positivists tell us that one day a new scientific discipline will emerge which will make good their promises and will describe in every detail the physical and chemical processes that produce in the body of man definite ideas. Let us not quarrel today about such issues of the future. But it is evident that such a metaphysical proposition can in no way invalidate the results of the discursive reasoning of the sciences of human action.
Hans-Hermann Hoppe later gave Mises’s argument a rigorous action-logical foundation: Humans are characterized by the ability to learn (Lernfähigkeit). The ability to learn means, first and foremost, that an acting person cannot already know today his own future stock of knowledge—nor that of all others—which will determine future action.
The reason: Human learning capacity cannot be denied without contradiction; the negation of the statement “I can learn” is self-contradictory, so the statement itself is true a priori. If you say, “Humans cannot learn,” you commit a performative contradiction: by making that statement, you assume your interlocutor does not yet know it but can learn it—otherwise you wouldn’t bother saying it.
(Incidentally: teachers, professors, and scientists in particular all presuppose that humans can learn. Otherwise they would not even try to discover and disseminate new knowledge—for themselves or others. Any professor who would deny the ability to learn would be a cynic, perhaps even a charlatan.)
And if you say, “Humans can learn not to learn,” you presuppose the ability to learn—namely, that one can learn that one cannot learn—which is obviously false and an open contradiction. Since the ability to learn of acting humans cannot be denied without contradiction—and is therefore logically true a priori—one also cannot know today how people will act in the future: The actor neither knows his own future knowledge that will determine his actions, nor can he know today the future knowledge of others that will shape their actions.
One may believe that humans will someday perfectly predict future natural forces—that is a debatable proposition. What cannot be maintained, however, is that future human action will become predictable or can be charted like an impulse-response function (“if A happens, then B follows”).
Of course, this does not mean everything in human action is uncertain and unpredictable—nor that everything is certain. Rather, for purely logical reasons, where there is certainty there must also be uncertainty; and where there is uncertainty there must also be certainty. The logic of human action tells us that there are things in human action we know with apodictic certainty: that humans act; that the actor pursues goals he seeks to achieve with scarce means; that action necessarily requires time, making time an indispensable means for the actor; and more. But the logic of human action also tells us: Scientifically, we cannot know with certainty how and when humans will act in the future—and the reason is that humans can learn, a statement that cannot be denied without contradiction and is therefore true a priori.
As long as future human action takes place under uncertainty—as long as there are aspects of human action subject to uncertainty—the reason remains why people will continue to demand money in the future (no matter how technologically advanced it may be). This is why money will retain value for people and cannot become irrelevant.
Or does Elon Musk perhaps believe that future humans will operate under a “different logic” than we do today? That would be difficult or impossible to conceive. For “our logic” is the precondition of any coherent thought whatsoever. One cannot even formulate the sentence “logic could change” without relying on our current logic—specifically on the law of non-contradiction (that the same statement cannot be true and false at the same time, in the same sense).
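The law of non-contradiction invoked here can even be stated and proved formally. As a minimal sketch in Lean 4 notation (the theorem name is merely illustrative), the claim that a proposition and its negation cannot both hold is derivable from the ordinary rules of inference alone:

```lean
-- Law of non-contradiction: for any proposition p,
-- it is not the case that both p and ¬p hold.
-- Given a hypothetical proof h of (p ∧ ¬p), applying its
-- second component (¬p, i.e. p → False) to its first (p)
-- yields a contradiction.
theorem non_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun h => h.2 h.1
```

Notably, even stating this theorem presupposes the logical framework it expresses—which is precisely the article’s point: any attempt to argue against our logic must already employ it.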
Any future being (human, post-human, AI entity, alien, etc.) that we regard as capable of coherent thought, communication, or science would have to use the same fundamental logical principles we use—because those principles are what make “coherent thought” possible for us in the first place. One may speculate: perhaps superintelligent AIs or uploaded consciousnesses will one day think in ways literally unimaginable to us, operating with a “new logic.” But even such a thought stands on the ground of the logic we know: Any being that claims “our logic is different from yours” already presupposes our logical categories of identity, non-contradiction, and difference.
If a counterpart truly had a different logic, we would very likely be unable to understand it at all—let alone communicate with it. In fact, it would be questionable whether such a counterpart would even appear to us as human. Therefore, if Elon Musk truly expects money to someday become irrelevant to humans, this could only happen in a world incomprehensible to us—one in which at least logic and human action no longer apply.