Mises Wire

Why AI and Big Data Cannot Plan an Economy

Contemporary academia’s equation of “science” with “measurement” amounts to a positivist assault on Ludwig von Mises’s praxeology. Mises shunned mathematical formalism not because of any deficiency in his mathematics, but because of his superior ontological grasp of what economics actually studies.

While physics successfully models inanimate bodies—where a copper atom reacts to heat according to immutable, universal constants—economics concerns human action driven by conscious, fluctuating intent. Because no constant dictates that a 10 percent income rise yields an 8 percent consumption increase, economic data is merely unrepeatable history, not scientific law.

Attempting to extract universal predictions from this historical debris is “historicism”—a methodological error akin to using Napoleonic War statistics to forecast World War III. Friedrich Hayek identified this blind imitation of natural sciences as “scientism”: a structural blindness where the distinct ontology of human choice is ignored in favor of a false, mechanistic precision. Thus, there is an ontological divergence between “science” and “scientism.”

The Lucas Critique: When Mathematics Bowed to Logic

Modern economics was forced, with a fifty-year delay, to concede the validity of Mises’s position. In 1976, Robert Lucas devastated Keynesian macroeconomics by showing that one cannot predict the effects of a policy change on the basis of past correlations alone: when a government implements a policy, the public alters its expectations, and the structural parameters of behavior shift. This is now known as the Lucas Critique, and it belatedly validated Mises’s insight that economics has no constants.

Yet Lucas did not abandon modeling; he invented “rational expectations,” assuming that ordinary people reason like the model-builders themselves. Lucas identified the terminal illness but prescribed a more sophisticated poison. Mises, unbound by formulaic necessity, saw reality in its raw form: markets are processes, not equilibria; the future holds uncertainty, not calculable risk.
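The parameter instability at the heart of the Lucas Critique can be sketched with a toy simulation (all numbers are invented for illustration): a planner estimates the marginal propensity to consume from historical data, treats it as a constant, and then the policy itself shifts the parameter out from under the forecast.

```python
import random

random.seed(0)

# Toy illustration of the Lucas Critique (all numbers invented):
# households consume a fraction of income, but that fraction is not a
# constant -- it shifts when policy changes expectations.

# Regime 1: true marginal propensity to consume (MPC) is 0.8.
incomes = [100 + 5 * i for i in range(20)]
consumption = [0.8 * y + random.gauss(0, 1) for y in incomes]

# The planner estimates the "constant" from history (least squares
# through the origin) and treats it as a law.
mpc_hat = sum(c * y for c, y in zip(consumption, incomes)) / sum(y * y for y in incomes)

# Regime 2: a policy announcement shifts expectations; households now save more.
true_mpc_after = 0.5
forecast = mpc_hat * 200        # model forecast for an income of 200
actual = true_mpc_after * 200   # what people actually do after the shift

print(f"estimated MPC: {mpc_hat:.2f}")   # close to 0.8
print(f"forecast {forecast:.0f} vs actual {actual:.0f}")
```

The regression is flawless on the data it was fitted to; it fails the moment the behavior-generating parameter moves, which is exactly what a policy change does.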

Mises eschewed mathematics in economic theory to remain realistic; the neoclassicals distorted reality to fit their equations.

False Precision: From the Lucas Critique to Financial Collapse

Critics argue: “Science requires precision!” This is our age’s greatest lie. Mathematics in economics manufactures false precision.

Consider Karl Marx: Das Kapital overflows with algebraic equations proving exploitation rates. Yet his premise—the labor theory of value—was fallacious. Mathematics created sophisticated but fundamentally flawed modeling; formulas guarantee no truth.

The 2008 Financial Crisis was the ultimate test, pitting Austrian methodology against the mainstream. Banks and rating agencies wielded models like the Gaussian copula and Value-at-Risk, calibrated on 50 years of data, claiming the probability of a nationwide housing collapse was one in a billion. The result? Complete implosion due to structural blindness: the models assumed calculable risk (casino-like) rather than uncertainty (history-like). They mistook mathematical distributions for unpredictable human choice.

Ben Bernanke, Chairman of the Federal Reserve, despite having access to unprecedented data, consistently denied reality because his models could not account for it. In 2005, he stated: “We’ve never had a decline in house prices on a nationwide basis.” In May 2007, he claimed: “We do not expect significant spillovers from the subprime market.” Bernanke lacked the correct theory, not data.

Austrian economists—using the logic of Mises-Hayek Business Cycle Theory, without complex models—predicted the catastrophe years ahead: “When central banks artificially suppress interest rates, malinvestment occurs, and bubbles must burst.”

Most damning was the collapse of Long-Term Capital Management in 1998. Nobel laureates using the Black-Scholes framework calculated that a catastrophic loss was virtually impossible within the lifetime of the universe. When Russia defaulted on its debt in August 1998, correlations that had held for decades decoupled instantly, a phenomenon Nassim Taleb calls the Ludic Fallacy: mapping casino probabilities onto human history. Technocrats treat the economy as ergodic, where the past maps the future; Mises identified it as non-ergodic, where one black swan renders a trillion data points obsolete. Logic without math triumphed over math without logic.
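The ergodicity point can be sketched in a few lines (the distribution and parameters are invented for illustration, not LTCM’s actual model): a Gaussian risk model fitted to a mostly calm sample declares a “worst case” that the fat-tailed reality blows straight through.

```python
import random
import statistics

random.seed(1)

# Toy illustration (invented parameters, not LTCM's actual model):
# reality is fat-tailed -- mostly calm days, rare violent ones.
def daily_return():
    return random.gauss(0, 20) if random.random() < 0.01 else random.gauss(0, 1)

history = [daily_return() for _ in range(250)]   # one year of mostly calm data

# The risk model assumes returns are Gaussian with the sample's moments.
mu = statistics.mean(history)
sigma = statistics.stdev(history)
worst_model_loss = mu - 4 * sigma   # a 4-sigma day: "once in decades" under a Gaussian

# Reality keeps drawing from the fat-tailed process.
worst_actual = min(daily_return() for _ in range(10_000))

print(f"model's worst case: {worst_model_loss:.1f}")
print(f"actual worst day:   {worst_actual:.1f}")   # far beyond the model's bound
```

The model is not merely imprecise; it is categorically blind, because the tail events that matter were never representable in the distribution it assumed.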

The Calculation Problem: Society Cannot Be Engineered

In the socialist calculation debate, socialists claimed supercomputers—solving all supply-demand equations—would eliminate the need for markets. Mises retorted: Your problem isn’t processing power; it is absent subjective data.

Economic information—knowledge of time, place, subjective tastes, and fleeting opportunities—disperses within millions of minds and is tacit. Information only reveals itself when individuals act and prices form. Before action, the data does not exist to feed the computers.

This was a logical necessity. The Soviet collapse confirmed it, but history was not even required: Mises demonstrated that socialism is impossible, not merely inefficient. For decades, Paul Samuelson’s textbook predicted that the Soviet economy would catch up with America’s. In 1989, shortly before the collapse, he wrote: “The Soviet economy is proof that a socialist command economy can function and thrive.” The data said one thing; reality said another.

The Computational Fallacy: Calculation Is Not Economics

With large language models, a new argument has emerged: Mises may have been right in 1920, the argument goes, but now we have the processing power to simulate entire economies. This fundamentally confuses engineering with economics.

  • Engineering Problem: Build a bridge. Physics laws are known. Material strength is known. The objective—carry a 10-ton load—is fixed. AI calculates the optimal design.
  • Economic Problem: Build the bridge or use those resources for a hospital? AI cannot answer this because value is not an objective property measurable with instruments—it is a subjective judgment existing only at the moment of choice.

No common objective unit compares bridge-value versus hospital-value. The only method is market prices derived from the free exchange of private property. AI without prices is like engineers knowing physics formulas but not project goals. Infinite processing cannot calculate subjective value not yet demonstrated in action.

Digital socialists point to Amazon’s or Walmart’s logistics as proof of planning, but this is a category error. These firms are price-takers, utilizing external market prices for land, labor, and capital. Without those signals, algorithms have no common denominator for calculation. The same lesson appeared in the failed Soviet OGAS cybernetics experiments of the 1960s and 1970s. Even with proto-internet links between factories and linear-programming solvers, the system failed because its inputs, the factory data, were systematically falsified by agents seeking bureaucratic quota compliance rather than consumer satisfaction. Without private property, no computational power can compensate for corrupted feedback loops.
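The corrupted-feedback problem can be made concrete with a minimal allocation sketch (factory names, capacities, and quantities are all hypothetical): the planner’s arithmetic is flawless, but it runs on reported capacities that managers have padded to meet quotas, so the “optimal” plan wastes real resources.

```python
# Toy illustration (invented numbers): a central planner allocates steel
# proportionally to REPORTED capacity. Managers inflate their reports.

true_capacity = {"factory_a": 100, "factory_b": 60}
reported_capacity = {"factory_a": 150, "factory_b": 120}  # padded for the quota

steel_supply = 200

# The planner's "optimal" plan, computed from the reports.
total_reported = sum(reported_capacity.values())
plan = {f: steel_supply * cap / total_reported
        for f, cap in reported_capacity.items()}

# Reality: each factory can only process up to its true capacity.
used = {f: min(plan[f], true_capacity[f]) for f in plan}
wasted = steel_supply - sum(used.values())

print(f"plan on paper: {plan}")       # looks feasible
print(f"steel idle in reality: {wasted}")
```

The failure is not computational. A larger solver, fed the same falsified reports, produces the same waste faster.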

AI as Prisoner of the Past

Generative AI’s greatest weakness is its inherent historicism. These systems train on trillions of tokens of past data, predicting the next pattern by statistical probability.

But Austrian entrepreneurship theory isn’t discovering patterns; it concerns alertness to opportunities for which no data exists. Feed all 2005 data into an AI, and it might design a better Nokia, but never the iPhone. The iPhone wasn’t in the data; it was in Steve Jobs’s vision, plans, and actions to which consumers responded positively. An AI-planned economy would be static—optimizing the status quo but unable to generate radical innovation—which, by definition, breaks past patterns. Progress stops.

LLMs predict from probability distributions—bell curve extrapolation. Economic progress is driven by entrepreneurial jumps—outliers that models discard as noise. If an AI in 1870 optimized lighting, it would suggest efficient kerosene wicks; never the incandescent bulb, as parameters for such leaps didn’t exist in the data. AI optimizes the mean; markets often reward the exception.
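The point about probability distributions can be shown with a minimal next-token model (a hypothetical six-word corpus, not a real LLM): maximum-likelihood prediction assigns exactly zero probability to any continuation that never appeared in the training data.

```python
from collections import defaultdict

# Toy next-token model: count which word follows which in the training corpus.
corpus = "better kerosene wick brighter kerosene wick".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    """Maximum-likelihood probability of `nxt` following `prev`."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(prob("kerosene", "wick"))       # 1.0 -- the past pattern, confidently repeated
print(prob("incandescent", "bulb"))   # 0.0 -- the innovation was never in the data
```

Real LLMs smooth these probabilities, but the structural point stands: the model can only recombine the recorded past, while the entrepreneurial act is precisely the continuation the record does not contain.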

Tacit Knowledge: That Which Cannot Be Uploaded

Friedrich Hayek’s 1945 masterpiece, “The Use of Knowledge in Society,” and Michael Polanyi’s concept of tacit knowledge killed technocratic dreams. Vast economic knowledge cannot be converted into bits and bytes.

A surgeon’s hand dexterity, a chef’s olfactory sense, a trader’s gut instinct, a mechanic’s recognition of engine hums all represent tacit knowledge: 1) inarticulate—it cannot necessarily be verbalized and quantified; 2) local—tied to specific time and place; 3) fleeting—constantly changing; and, 4) internal and subjective.

Digital socialists imagine uploading society’s knowledge to servers. But you cannot upload what you cannot articulate. Free markets—through prices—allow individuals to utilize tacit knowledge without central reporting. In AI command economies, vital knowledge is erased because it is not capturable “data.” The result would be systems working perfectly “on paper,” but failing catastrophically in reality.

Autonomous vehicle failure exemplifies this perfectly. AI handles explicit traffic laws but fails at the social negotiation of a four-way stop—subtle tacit knowledge exchanged through eye contact or hand waves. This is Hayek’s “knowledge of particular circumstances,” embedded in local actors’ gut instincts. Because it is never formally captured—only acted upon—it cannot be uploaded to central servers. A data-driven economy is therefore socially blind, incapable of navigating the millions of micro-negotiations that prevent systemic gridlock.

The Problem of Incentive and Skin in the Game

Assume supercomputers could perfectly predict preferences. What is the error-correction mechanism?

In free markets, the threat of bankruptcy disciplines and limits firms. Entrepreneurial mistakes evaporate billions. Brutal pressure transfers resources from the incompetent to the competent. But what risk does government AI face? If algorithms order the production of unwanted shoes, does the algorithm go bankrupt? No. The public pays through inflation and shortages.

State AI lacks “skin in the game.” Without the threat of existential ruin, there is no guarantee that calculations serve public welfare. AI can easily optimize for political regime survival rather than consumer well-being.

The Alchemist’s Error

Reducing economics to data science is a category error comparable to astronomers seeking Newtonian formulas for choosing between pizza and a cheeseburger, ignoring the internal volition driving the system. Mises would probably view AI as a magnificent tool for aiding price discovery, but substituting it for the market mechanism repeats the Soviet disaster with higher-fidelity echoes. Modern technocrats are alchemists—transmuting mathematical symbols while remaining blind to the substance of human action. Without the decentralized demonstration of preference through prices and the discipline of profit and loss, AI is merely a blind giant optimizing the fossilized record of the past, structurally incapable of calculating the future.

Image Source: Adobe Stock
Note: The views expressed on Mises.org are not necessarily those of the Mises Institute.