Anthropic's AI Empire Cracks Under Weight of Ambitions
A 47 percent collapse in Claude's code quality has exposed cracks in Anthropic's business model, as developers abandon the tool and the company scrambles for capital to sustain its compute-heavy growth strategy.
The code was falling apart. Developers who once praised Claude found themselves watching their AI assistant produce broken programs with alarming frequency, prompting a quiet exodus to competitors.
Anthropic's $900 billion valuation now faces its most serious stress test. A 47 percent decline in code quality, documented by outside researchers, has spread across the platform and revealed how fragile the company's subsidized computing model really is.
"Right now, from five weeks ago to today, the code quality is over 47.3 percent worse than when it was first released," said Dave Kennedy, CEO of cybersecurity firm TrustedSec. "It's really bad, I mean unusably bad." Kennedy's analysis tracked defects, security vulnerabilities and task completion rates across thousands of coding sessions.
The problems extend far beyond occasional glitches. Independent testing shows Claude has become unreliable for complex engineering work at the precise moment Anthropic races to raise new capital for its insatiable compute demands.
Stella Laurenzo, senior director of AI at AMD, analyzed 6,852 Claude Code sessions containing 234,760 tool calls. Her team found the AI had regressed to the point where it could not be trusted with critical engineering tasks. Laurenzo published her findings in a detailed GitHub report, and her engineers switched to a competitor for their most important work.
Veracode's independent testing confirmed the deterioration. Claude Opus 4.7 introduced vulnerabilities in 52 percent of coding tasks. OpenAI's models produced vulnerabilities in roughly 30 percent of similar tasks. The gap stems from corporate choices to prioritize speed and cost over reasoning depth.
Anthropic's own April 23 engineering postmortem laid out the damage. Three distinct missteps degraded output quality across the platform: the company reduced reasoning effort from high to medium to manage latency, a caching bug made Claude appear forgetful and repetitive, and system prompts capped response length. The first and third were deliberate cost-cutting choices; the caching flaw was an unintended defect that compounded them.
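Two of those levers are visible in ordinary API usage. Below is a minimal sketch using the Anthropic Python SDK's extended-thinking and output-length controls; mapping "high" and "medium" effort onto specific thinking-token budgets is an assumption for illustration, since the postmortem does not specify the actual mechanism:

```python
# Sketch of the two deliberate cost levers the postmortem describes,
# expressed via the Anthropic Python SDK. The effort-to-budget mapping
# and the budget values are assumptions, not Anthropic's internal settings.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

EFFORT_BUDGETS = {"high": 16_000, "medium": 4_000}  # hypothetical thinking budgets

def ask(prompt: str, effort: str = "medium", response_cap: int = 2_048):
    budget = EFFORT_BUDGETS[effort]
    return client.messages.create(
        model="claude-opus-4-20250514",  # placeholder model identifier
        # Lever 2: max_tokens caps total output; here it covers the
        # thinking budget plus the visible reply.
        max_tokens=response_cap + budget,
        # Lever 1: a smaller thinking budget trades reasoning depth for
        # lower latency and cost.
        thinking={"type": "enabled", "budget_tokens": budget},
        messages=[{"role": "user", "content": prompt}],
    )

# The third misstep, the caching bug, had no client-side knob and is omitted here.
```

Dropping the default effort in a wrapper like this lowers the ceiling on per-request latency and cost, which is the shape of the tradeoff the postmortem describes.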
The shortcuts emerged as Anthropic wrestled with an unprecedented demand spike: growth running at an 80-fold annualized pace in the first quarter. CEO Dario Amodei spoke candidly about the pressure at a recent conference.
"We planned for a world of 10x growth per year," Amodei said. "In Q1 2026, we saw 80x annualized growth per year in revenue and usage."
The company now operates at a $30 billion annualized revenue rate while pursuing a $50 billion fundraise at a $900 billion valuation.
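Back-of-the-envelope arithmetic puts those figures in perspective; the sketch below assumes "annualized" means the first quarter's growth rate compounded over four quarters, an interpretation Amodei did not spell out:

```python
# Back-of-the-envelope arithmetic on the reported figures.
# Assumption: "80x annualized" means the quarter-over-quarter growth
# factor, compounded over four quarters, equals 80.
annualized_growth = 80
quarterly_factor = annualized_growth ** (1 / 4)
print(f"Implied quarter-over-quarter growth: {quarterly_factor:.2f}x")  # ~2.99x

# Multiple implied by the fundraise terms.
valuation = 900e9            # $900 billion valuation
annualized_revenue = 30e9    # $30 billion revenue run rate
print(f"Valuation-to-revenue multiple: {valuation / annualized_revenue:.0f}x")  # 30x
```

In other words, sustaining the first-quarter pace means roughly tripling every quarter, on a business already valued at 30 times its revenue run rate.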
Explosive growth collided with fixed compute costs, forcing Anthropic into quality-degrading tradeoffs. Demand surged while infrastructure buckled under subsidized usage. The pattern suggests that brute-force model scaling cannot survive without continuous capital injections.
The failures carry implications beyond Anthropic's balance sheet. AI systems that introduce vulnerabilities in more than half of coding tasks cannot be trusted with critical infrastructure. That reality undermines calls for government regulation to enforce AI safety standards: there is little to standardize when basic product reliability remains out of reach.
Anthropic's scramble for compute resources reveals the deeper financial strain. The company secured full access to xAI's Colossus 1 data center, equipped with more than 220,000 NVIDIA GPUs and drawing 300 megawatts of power. It also announced a $1.5 billion joint venture with Blackstone, Hellman & Friedman and Goldman Sachs for enterprise AI services.
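Those facility figures are at least internally consistent: spread across the reported GPU count, 300 megawatts works out to roughly 1.4 kilowatts per accelerator, a plausible figure for high-end data-center GPUs once cooling, networking and host overhead are included. A quick check:

```python
# Sanity check on the reported Colossus 1 figures.
gpus = 220_000        # reported GPU count
power_watts = 300e6   # reported facility power: 300 MW
print(f"Facility power per GPU: {power_watts / gpus:.0f} W")  # ~1364 W
# ~1.4 kW per GPU is plausible for ~700 W accelerators plus cooling,
# networking, and host overhead.
```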
"Demand for Claude has grown at an unprecedented rate, and our infrastructure has been stretched to meet it," Anthropic told Fortune in an April statement. The company acknowledged inevitable strain on its systems while pledging expanded partnerships with Amazon and Google for additional capacity.
The current AI land grab has proven economically unsustainable without continuous capital infusion. Anthropic's performance collapse shows even the best-funded technology companies cannot maintain product integrity when their business models depend on subsidized computing and brute-force scaling. Regulators contemplating premature oversight should take note: these systems still cannot perform their core functions reliably.
Developers who once championed Claude as a revolutionary tool now face a reliability crisis. The quiet erosion of trust among enterprise users poses a greater threat than immediate cancellations. Each abandoned project, each engineer switching tools, signals a simple truth: the economics behind today's AI promises cannot sustain the performance they demand.