Bold reality check: if AI makes human labor obsolete, who ensures people don’t go hungry?
How will we be fed? This is the core question that’s rarely given serious attention amid the chatter about AI taking every job. The tech itself is formidable, yet similar fears have echoed since the Industrial Revolution, and most working-age adults still have work. The missing piece is a frank debate about what to do if this future actually arrives.
Even optimism from OpenAI’s Sam Altman—who says the future can be vastly better because AI will make us incredibly wealthy—rests on a risky bet that benefits will flow to almost everyone. For most people outside the circle of tech magnates, that assumption seems dubious. If AI fuels enormous prosperity, its distribution will still be a political struggle requiring open, hard conversations about who gets what from this new wealth.
This question has two parts. First, how do we design an economically effective system to redistribute the gains as machines take over and human labor’s share of income shrinks toward zero? The more critical question, though, concerns how this economic shift restructures power itself. Who determines what to tax once AI destroys labor income—the main source of government revenue in many advanced economies? Who decides how much everyday people—without equity stakes in the AI revolution—get to consume?
What happens in a world where machines generate most or all economic output and a handful of techno-billionaires decide how to allocate money, energy, minerals, and other resources to advance superintelligent systems? Who else has a say in whether to direct more resources toward healthcare, agriculture, or education?
“We need guardrails that preserve human agency, human oversight, and human accountability,” United Nations secretary-general António Guterres said at the AI Impact Summit in New Delhi. The future of AI “cannot be decided by a few countries or left to the whims of a few billionaires.”
Within AI circles, people discuss the alignment challenge—making sure machines act in line with their owners’ goals. The bigger hurdle is aligning those goals with society’s broader aims. AI will affect us all in consequential ways, yet our democratic tools often feel too feeble to curb the ambitions of the oligarchs steering these technologies.
History shows technological progress helped democracy spread by empowering an urban working class; politics adapted to represent them. But if ordinary work becomes unnecessary, will people’s power to influence government erode as a result?
In a practical frame, Anton Korinek and Lee Lockwood of the University of Virginia offer a primer on public finance in the AI era. They suggest consumption taxes will initially pick up the slack as labor income drops. Yet in a world dominated by AI-generated wealth, much of the return on machines’ output could be reinvested rather than consumed, requiring a heavier tax on capital to shoulder the burden.
Some propose using taxes to slow the transition early on, while others—like Korinek and Joe Stiglitz from Columbia—argue for channels that steer investment toward technologies that help workers rather than replace them. They also discuss taxes on fixed factors (land, spectrum, data) or on monopoly rents that don’t enhance societal wellbeing.
The plan sounds workable in theory, but it relies on one big assumption: that those who own the disruptive technologies will agree to share. In the U.S., for example, overall tax revenue as a share of GDP sits below 26%, with capital taxation at around just 2% of GDP. If labor income vanished, taxes on capital would need to rise substantially to fund essential services.
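A back-of-the-envelope calculation conveys the scale involved. This is an illustrative sketch only: the 26% and 2% figures come from the paragraph above, while the share of remaining revenue tied to labor income is a hypothetical assumption made for the example.

```python
# Illustrative fiscal arithmetic: how much capital taxation would need to
# grow if labor-linked revenue disappeared. All figures are shares of GDP.
total_revenue = 0.26   # overall U.S. tax take (figure cited in the text)
capital_tax = 0.02     # current capital taxation (figure cited in the text)

# Hypothetical assumption for illustration: suppose two-thirds of the
# remaining revenue is tied to labor income (payroll and wage-based taxes).
labor_linked = (total_revenue - capital_tax) * 2 / 3

# If labor income vanished, capital taxes would have to absorb that share
# just to keep total revenue constant.
required_capital_tax = capital_tax + labor_linked

print(f"Labor-linked revenue lost: {labor_linked:.1%} of GDP")
print(f"Capital tax would need to reach: {required_capital_tax:.1%} of GDP")
```

Even under these rough assumptions, capital taxation would need to grow severalfold just to stand still, which is why the political question looms larger than the economic design.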
Don’t count on quick changes. The OECD’s global tax agreement, finalized in 2021 to curb profit shifting by tech giants, faced political headwinds: while the Biden administration supported it, Donald Trump pulled the U.S. out in early 2025 after donations from tech leaders.
Some radical ideas could keep society afloat: directly distributing AI venture equity or even collecting taxes in the form of shares to build a public stake over time. A bolder option is for the government to expropriate a portion of AI equity upfront to redistribute wealth and ensure everyone has a stake in AI’s promised bounty. Korinek and Lockwood emphasize that automatic adjustments help manage radical uncertainty in AI development.
But grand visions face big obstacles. Governments would need to act before AI becomes overwhelming, which seems unlikely in today’s climate. Meanwhile, the tech oligarchs are pushing back against government constraints, and even antitrust efforts have struggled against entrenched power. Some observers note transnational tendencies: moneyed interests are exploring alternatives like “network-states” to sidestep traditional democratic governance if they can’t secure favorable policies at home.
If AI’s power continues to grow as expected, the only reliable path to keeping society fed may be a search for compromise with the very people who built this transformative technology. The question remains: will we collectively shape this future, or will a small cluster of moguls decide our fate? Would you support stronger shared governance, or do you believe market forces alone will eventually allocate resources fairly? Share your thoughts in the comments.