OpenAI Wants Taxpayers to Foot the Bill for AI's Social Fallout
OpenAI proposes wealth taxes and a robot tax to fund social programs for AI job displacement, while funding politicians who oppose AI regulation.
Four days after OpenAI demanded higher taxes on the wealthy to fund social programs for AI job displacement, a 20-year-old suspect hurled a Molotov cocktail at CEO Sam Altman's San Francisco home. The April 10 attack exposed raw public anxiety about artificial intelligence just as the company accelerating the disruption now wants governments to tax productive citizens to clean up the mess.
OpenAI released a 13-page policy document April 6 proposing higher taxes on capital gains, corporate income, and a "robot tax" on automated labor. The $852 billion company recommends "rebalancing the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor." The document also calls for a public wealth fund distributing AI returns to citizens and a four-day workweek with no pay cuts.
The company driving job destruction now wants taxpayers to fund the fallout. OpenAI generated $13 billion in revenue in 2025, while Goldman Sachs estimates AI is already eliminating roughly 16,000 U.S. jobs per month. The proposals effectively privatize corporate gains while socializing the costs of disruption onto taxpayers.
The document proposes taxpayer-funded income support through a four-day workweek with no pay cuts. Employers would be expected to maintain output and service levels while reducing hours, with the state subsidizing the resulting gap in labor costs. The arrangement amounts to government-subsidized employment that preserves corporate profit margins.
OpenAI's political donations contradict its policy proposals. OpenAI President Greg Brockman donated $25 million to MAGA Inc and $25 million to Leading the Future in 2025, with another $25 million pledged for 2026. Leading the Future's stated mission is to defeat politicians who would regulate AI more strictly. "Now they just need to stop spending hundreds of millions of dollars to defeat candidates who run on these policies!" Jeremy Slevin, senior advisor to Senator Bernie Sanders, wrote on social media, underscoring the gap between what OpenAI proposes and what its money funds.
The policy document aligns with a deregulatory environment while pushing state-funded social programs. President Trump signed an executive order in December 2025 reducing AI regulation and preventing "cumbersome" state rules. David Sacks, former White House AI czar, warned at Davos in January 2026 that state-level AI laws could cause America to "lose the AI race," dismissing safety concerns as "a fit of pessimism" and "a self-inflicted injury."
OpenAI has a documented history of opposing similar proposals at the state level. The company opposed California's SB1047, which proposed third-party audits, safety protocols, and a public compute cluster for startups—measures that align with OpenAI's current "Right to AI" proposals. Eryk Salvaggio of Tech Policy Press called the document a "policymercial"—marketing copy dressed as policy that sidesteps critique of present systems by shifting conversation to future problems.
Sam Altman's history with universal basic income reveals the company's long-standing interest in taxpayer-funded handouts. In 2016, Altman posted on Y Combinator advocating for UBI as essential for an AI-driven future. Between 2020 and 2023, OpenAI funded a study providing $1,000 monthly payments to 1,000 participants. The results showed recipients worked approximately 1.3 hours less per week, confirming what critics have long argued about guaranteed income reducing work participation.
The study's own researchers acknowledged the reduction in labor supply. Eva Vivalt, assistant professor of economics at the University of Toronto and principal investigator on the OpenAI UBI study, stated: "We observed moderate decreases in labor supply. From an economist's point of view, it's a moderate effect." She added that participants were "doing more stuff" and valued having more leisure time, framing reduced work as increased well-being.
This framing ignores the erosion of human dignity that comes from taxpayer-funded dependency. Work provides purpose, community, and self-worth beyond mere income. The OpenAI study's conclusion that reduced hours improve well-being dismisses the fundamental human need for meaningful contribution and economic self-reliance.
Hours after the attack on his home, Altman acknowledged public anxiety in a blog post. "The fear and anxiety about AI is justified," he wrote. "We are in the process of witnessing the largest change to society in a long time, and perhaps ever." The admission came as Quinnipiac polling shows 80 percent of Americans concerned about AI, with 55 percent believing it does more harm than good and 70 percent expecting reduced job opportunities.
The Molotov attack illustrates the desperation underlying public anxiety about AI's future. Daniel Alejandro Moreno-Gama, arrested April 10, had written online about AI leading to human extinction and stated Altman was "consistently reported to be a pathological liar." His actions represent the extreme end of public fear, but the sentiment extends far beyond violent extremism.
The document proposes a "New Deal for the AI age" even as the company funds candidates who oppose regulation. This dual strategy seeks to maximize corporate freedom to operate while shifting the political and financial burden of managing disruption onto the state: OpenAI wants government to fund social programs but not to regulate its core operations.
Wealth taxes and public wealth funds represent a fundamental attack on property rights and economic freedom. By seizing capital gains and corporate income to redistribute through government-controlled mechanisms, these proposals undermine the very foundations of free-market capitalism. The public wealth fund would nationalize a portion of AI-driven economic growth, placing investment decisions in the hands of bureaucrats rather than individuals.
The proposals amount to a bailout for the consequences of corporate-driven automation. OpenAI wants the government to assume financial responsibility for job losses the company's technology creates. This approach rewards corporate risk-taking while socializing the costs, creating a system where profits remain private and losses become public obligations.
In the wake of the attack, the company responsible for accelerating the disruption is asking citizens to trust its vision for the future. The policy document promises a better world built on taxpayer-funded handouts and government-managed wealth distribution, the same promise every totalitarian communist state has made. The path to that future leads through the erosion of property rights, economic self-reliance, and the human dignity that comes from meaningful work.