In the spring of 1812, British textile workers smashed power looms across Lancashire, convinced that the machines would make their skills worthless and their families destitute. They were right about the disruption. Mills did displace hand-weavers. Communities that had organized themselves around a particular kind of skilled labor were genuinely torn apart. The Luddites weren’t stupid, and they weren’t wrong to feel the ground shifting. They were wrong about one thing: the conclusion. The labor those machines displaced didn’t vanish. It migrated into factories, railways, cities, and industries that hadn’t existed, filling wants that hand-weavers in 1812 couldn’t have imagined needing to satisfy.
We are the Luddites now. Not the smashing machines part—the being right about the disruption and wrong about the conclusion part. Artificial Superintelligence will displace work on a scale that makes the power loom look modest. The disruption is real. The conclusion being drawn from it—permanent mass unemployment, the end of human economic relevance—is the same mistake, wearing better clothes.
What Production Actually Does
In 1803, Jean-Baptiste Say made an observation so simple that economists have spent two centuries finding ways to misread it: production is the source of purchasing power. When you produce something valuable, you generate income that creates demand for other goods. Supply and demand are not independent dials that can permanently fall out of alignment across a whole economy. New productive capacity doesn’t destroy demand—it transforms it, creates it, routes it toward things that didn’t previously exist.
This is why agriculture’s collapse from roughly 40 percent of the American workforce in 1900 to less than 2 percent today did not produce permanent unemployment for 38 percent of the workforce. Their descendants became nurses, programmers, pilots, therapists, baristas, and UX designers—roles that barely existed or didn’t exist at all when their great-grandparents were plowing fields. The productivity gains from mechanization generated the income and time to desire new goods, and markets routed labor toward satisfying those desires.
The ATM sharpens the point. When cash machines arrived, the consensus was that bank teller employment would collapse—why pay a human to do what a machine does more cheaply around the clock? What actually happened: lower operating costs per branch made it economical to open far more branches, and teller employment rose for decades after the ATM’s introduction. The humans moved up the value chain, handling relationships, mortgages, and decisions that required judgment and accountability. The “lump of labor,” as it always has, turned out to be elastic beyond the prophets’ imagination.
The Accountability Problem No One Is Talking About
The serious version of the AI unemployment argument concedes the history and says: this time the machine replaces not just physical labor or narrow tasks, but general intelligence—the creative and adaptive thinking that absorbed displaced workers in every previous wave. There will be nowhere left to go.
This argument has a surface logic that dissolves under pressure, but not for the reasons usually given. The problem isn’t just that human wants are infinite, or that new industries will emerge—though both are true. The deeper problem is that the argument misunderstands what markets actually trade.
Markets don’t just allocate cognitive output; they allocate accountability. When a surgeon operates, she puts her reputation, her license, and her livelihood on the line. When an entrepreneur launches a product, he bets his capital on a judgment call about what people want. When a lawyer advises a client, her future business depends on being right. This loop of consequence is not incidental to how markets work—it is how markets generate trustworthy signals and reliable behavior. Prices mean something precisely because the people setting them stand to lose if they’re wrong.
An AI has no stake in the outcome. It cannot be ruined. It has no reputation that compounds over time and no capital that evaporates on a bad call. This isn’t a sentimental preference for human warmth—it’s a structural feature of economic life that AI cannot replicate, because replication would require the AI to bear consequences, which entails owning things and suffering losses, which opens a different and far more interesting set of questions. Until that threshold is crossed, the accountability relationship between a professional and a client, a doctor and a patient, or an entrepreneur and a market retains irreducible economic value. It is not a vestige that productivity growth will eventually sweep away; it is load-bearing.
Add to this that human desire for goods and services with genuine human authorship—craft, not just output—tends to increase as automation makes the generic abundant. The more AI produces perfect, frictionless, optimized results, the more people will pay a premium for the imperfect, human-made alternative. This is already visible in food, furniture, music, and clothing. The scarcity of humans will not be a problem.
The Real Threat Isn’t Automation, It’s Monopoly
If the unemployment panic is largely a phantom, there is a genuine danger inside the AI revolution—and it is the one that free-market advocates are best positioned to name, because everyone else keeps mistaking it for its opposite.
Consider what happened to the internet. It was built on open protocols and decentralized architecture, a genuine commons. Then, gradually, it was enclosed: data privacy regulations that only large incumbents could afford to operationalize, intellectual property law extended and weaponized to prevent interoperability, licensing requirements that raised the floor for new entrants while leaving incumbents untouched. The result is a digital economy dominated by five platforms that use their regulatory relationships to neutralize competition more effectively than any trust ever managed through market power alone.
The same playbook is running in AI right now, and it is running faster. The EU AI Act—presented as a safety framework—establishes a tiered compliance structure that imposes costs on new entrants, while established players absorb them with ease. The Brussels-based think tank CEPS estimated that setting up a compliant quality management system for a single high-risk AI product could cost a small firm up to €330,000—a rounding error for OpenAI or Google, but a company-ending burden for a startup in Warsaw or Lisbon. In the United States, OpenAI increased its federal lobbying spend nearly sevenfold in a single year, while over 460 organizations lobbied Congress on AI in 2024 alone—the overwhelming majority of them incumbents with the resources to shape whatever rules emerge in their favor. This is the standard logic of regulatory capture: use the language of public safety to construct a moat, then collect the toll.
When the state becomes the mechanism for allocating who can develop and deploy AI, the outcome is not a competitive market that distributes gains broadly. It is a cartel with a government seal. The unemployment that follows from that world would not be caused by machines being too productive; it would be caused by laws preventing people from accessing the machines at all.
What We Should Actually Be Demanding
The fear of AI-driven unemployment is understandable in the same way the Luddites’ fear was understandable—it correctly identifies a real disruption and then draws the wrong political conclusion from it. The Luddites wanted to smash the looms. Today’s equivalent is calling for AI licensing regimes, mandatory impact assessments, compute restrictions, and liability frameworks designed by incumbents to freeze the current hierarchy in place. These policies won’t slow the disruption; they will determine who profits from it.
The alternative is not naïve optimism about markets solving everything painlessly. The transition will have real costs, and some of them will fall on people who don’t have the resources to absorb them easily. But the answer is not to restrict the technology; it is to remove every other barrier that prevents people from adapting: occupational licensing laws that keep workers out of new fields, credentialing monopolies that gatekeep entire professions, zoning and housing regulations that prevent people from moving to where new opportunities are emerging. These are the friction points at which political intervention makes people worse off during a technological transition, and they are the fights worth having.
Beyond that, open-source AI development deserves aggressive defense, not as a techie preference but as a matter of economic freedom. Concentrated, proprietary, state-adjacent ASI is the scenario that should keep people up at night. Distributed, competitive, accessible ASI—the kind that lets a small entrepreneur in Budapest or Bangalore build something that competes with a Silicon Valley incumbent—is the scenario where Say’s Law gets to do its work, where the new purchasing power gets created and the new industries get discovered and the new roles for human labor emerge from the bottom up rather than being assigned from the top down.
The machine isn’t the threat, but the cage around the machine is. And right now, the people most loudly warning about the danger of AI are the ones most actively building the cage.