Ethereum co-founder Vitalik Buterin has publicly endorsed artificial intelligence firm Anthropic as it faces a critical deadline from the U.S. Department of War over the military's use of its AI technology. The standoff centers on Anthropic's refusal to allow its AI models to be used for fully autonomous weapons platforms and mass surveillance of American citizens.
In a post on X, Buterin directly addressed Anthropic CEO Dario Amodei, stating, "It will significantly increase my opinion of Anthropic if they do not back down, and honourably eat the consequences." He characterized the company's stance as "actually a very conservative and limited posture, it's not even anti-military," adding that "Fully autonomous weapons and mass privacy violation are two things we all want less of."
The confrontation escalated this week as U.S. War Secretary Pete Hegseth gave Anthropic a deadline of Friday at 5 p.m. to grant the military broad access to its AI models. According to reports from CNBC and Reuters, if Anthropic declines, Hegseth has threatened to designate the company a "supply chain risk"—a label typically applied to foreign adversaries—or recommend invoking the Defense Production Act, which grants the government sweeping authority to direct domestic industry for national security.
Anthropic, which secured a $200 million Department of War contract last year and was until recently the only AI firm authorized on U.S. classified networks, has maintained its position. A company spokesperson told Reuters the talks were "continued good-faith conversations" aimed at supporting national security "reliably and responsibly." Pentagon spokesman Sean Parnell framed the issue operationally, stating that partners must help U.S. forces "win in any fight," while a senior official warned that disentangling from Anthropic would be painful.
The commercial and precedent-setting implications are significant. Reuters and Axios report this is a battle over whether AI providers can enforce product-level guardrails once their models are embedded in national security systems. Government-contracts lawyer Franklin Turner noted that punitive action would be "unprecedented" and would likely trigger litigation. The dispute unfolds as a Citrini Research report titled "The 2028 Global Intelligence Crisis" warns of AI-driven dystopian risks, including automation pushing U.S. unemployment over 10%, which has pressured AI-linked stocks.
Anthropic, recently valued at $380 billion after a $30 billion funding round, generates roughly 80% of its revenue from corporate clients. Disgraced FTX co-founder Sam Bankman-Fried was an early investor, committing $500 million before his exchange's collapse.