Anthropic, the AI company behind the Claude model, is publicly challenging the prevailing Silicon Valley wisdom that massive spending guarantees success in artificial intelligence. Company President Daniela Amodei is promoting a "do more with less" philosophy that directly contradicts the industry's current race to pour unprecedented capital into computing infrastructure.
The contrast is stark: OpenAI has made commitments worth approximately $1.4 trillion for computing power and infrastructure, establishing enormous data centers and securing advanced chips years in advance. This approach is based on scaling laws—the principle that increasing computing power, data, and model size predictably improves performance—which now underpins the entire financial structure of AI competition.
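The scaling-law principle the article describes has a well-known quantitative form. As an illustration (the specific functional form below is the Chinchilla-style fit from Hoffmann et al., 2022, not something Anthropic or OpenAI state in this article), pretraining loss L is modeled as a function of parameter count N and training tokens D:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here E is an irreducible loss floor, and A, B, α, β are empirically fitted constants. The practical takeaway is that loss falls predictably, as a power law, with both model size and data, which is why labs treat more compute as a reliable path to better models.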
"A lot of the numbers that are thrown around are sort of not exactly apples to apples, because of just how the structure of some of these deals are kind of set up," Amodei told CNBC, highlighting how companies feel pressured to commit early for hardware delivery years later.
Despite advocating efficiency, Anthropic isn't operating on a shoestring budget. The company has around $100 billion in computing commitments and acknowledges that future requirements will be "very large" to stay at the technological frontier. Amazon recently brought Project Rainier online for Anthropic: an AI compute cluster with more than one million Trainium2 chips powering the Claude models.

Anthropic's alternative strategy focuses on three key areas: using higher quality training data, applying post-training techniques to improve model reasoning, and making product decisions that reduce operational costs and enhance scalability for customers. This emphasis on efficiency comes as the industry grapples with AI compute demand growing twice as fast as Moore's Law, potentially requiring $500 billion annually until 2030.
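To make the "twice as fast as Moore's Law" comparison concrete, here is a toy sketch. The doubling periods are illustrative assumptions (Moore's Law is commonly stated as a doubling roughly every two years), not figures from the article:

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

# Over a four-year horizon:
moore = growth_factor(4, 2.0)   # doubles every 2 years -> 4x
demand = growth_factor(4, 1.0)  # doubles every year    -> 16x
```

The gap compounds: a process doubling every year outpaces one doubling every two years by the square of its growth, which is why demand growing faster than hardware efficiency translates directly into ballooning capital spending.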
The timing is significant as AI faces increased scrutiny from investors demanding proof of real-world value. After widespread testing in 2024 and deployment in 2025, the technology must now demonstrate reliability and financial viability. Investment in AI equipment and infrastructure could reach $500 billion in 2026, making efficiency crucial.
Amodei, who co-founded Anthropic with her brother Dario (the company's CEO and a former Baidu and Google researcher), acknowledges the irony: "We have continued to be surprised, even as the people who pioneered this belief in scaling laws." She notes that colleagues frequently say "the exponential continues until it doesn't," and that, year after year, predicted limits on scaling have failed to materialize.
The broader AI landscape faces additional challenges. Chinese competitors like DeepSeek are gaining traction with cheaper, open systems that anyone can modify—Chinese-made open systems now represent 17% of all downloads, according to MIT and Hugging Face research. Even OpenAI's Sam Altman has suggested his company may have chosen "the wrong side of history" by focusing on expensive, proprietary systems.
As 2026 unfolds, the central question becomes whether Anthropic's efficiency-focused approach will prove prescient or if overwhelming computational power remains indispensable in the AI race.