A federal judge has delivered a significant legal blow to the Trump administration, granting artificial intelligence company Anthropic a preliminary injunction that blocks the government from labeling it a "supply chain risk" and ordering federal agencies to cut ties. The ruling from Judge Rita F. Lin of the Northern District of California represents a major victory for Anthropic in its escalating legal battle with the Defense Department.
The dispute originated from a collapsed $200 million contract awarded to Anthropic in July 2025 by the Department of War's Chief Digital and Artificial Intelligence Office. Negotiations broke down after Anthropic insisted on ethical usage restrictions for its Claude AI model, specifically prohibiting its use for mass surveillance of Americans or lethal autonomous warfare. The Defense Department rejected these conditions.
Following the impasse, the government escalated dramatically. On February 27, after Anthropic refused to drop its restrictions, Secretary of War Pete Hegseth threatened the supply chain risk designation. President Trump then posted a directive on Truth Social ordering all federal agencies to "immediately cease" using Anthropic's technology, calling the company "radical left" and "woke." A formal supply chain risk designation was issued on March 3, marking the first time this classification—typically reserved for foreign hostile actors—had been applied to a domestic company.
Judge Lin's injunction, issued after hearing oral arguments, found the government's actions likely violated Anthropic's First Amendment and due process rights. The judge wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." The order blocks all three government actions, requires a compliance report by April 6, and restores the status quo from before February 27.
Legal experts note the case's significance for commercial speech protections, as it suggests that ethical restrictions on technology use constitute protected expression. The ruling could establish important precedent for how companies may condition access to their software and could limit the government's ability to weaponize "supply chain risk" designations in retaliatory ways.
The decision carries substantial consequences for the broader AI industry and its relationship with the government. Technology companies now have clearer legal standing to enforce ethical usage terms in government contracts, and the case highlights the growing tension between rapid technological advancement and established procurement processes. The ruling arrives during a contentious election year, testing the limits of executive authority over private companies.