The U.S. Court of Appeals for the District of Columbia Circuit has rejected AI company Anthropic's emergency motion to pause the U.S. Defense Department's designation of the firm as a national security supply chain risk. A three-judge panel denied the stay on April 9, 2026, ruling that the government's interest in securing AI technology during active military conflict outweighed any financial or reputational harm to Anthropic.
The decision leaves in place part of the Pentagon's official designation of Anthropic's products as a "supply-chain risk to national security." This label, unprecedented for an American company, blocks Pentagon contractors from using Anthropic's AI models and could set a chilling precedent for other tech firms that resist government demands. The panel stated, "In our view, the equitable balance here cuts in favor of the government... On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."
The dispute originated from a July 2025 deal for Anthropic's AI model Claude to become the first large language model approved for use on classified networks. Negotiations collapsed in February 2026 when the government sought to renegotiate, insisting on unrestricted military use of Claude. Anthropic refused, citing ethical principles against lethal autonomous weapons and mass domestic surveillance.
In late February, President Donald Trump ordered all federal agencies to stop using Anthropic products, accusing the company of a "disastrous mistake trying to strong-arm the Department of War." Anthropic sued the administration in March, calling the ban an "unlawful campaign of retaliation." In late March, a California district court issued a preliminary injunction against the Pentagon's designation, temporarily halting Trump's directive, which the court branded "Orwellian."
However, federal procurement law forces Anthropic to challenge the designation on two separate legal tracks: on constitutional grounds in the California district court, and under the procurement statute itself in the D.C. Circuit. The appellate ruling acknowledged that Anthropic will "likely suffer some degree of irreparable harm" but emphasized the need for "substantial expedition." Acting U.S. Attorney General Todd Blanche hailed the decision as a "resounding victory for military readiness," asserting that "military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."
The legal battle unfolds as Anthropic solidifies its position as an AI titan. Following a landmark Series G funding round that valued the company at $380 billion, Anthropic reported a $30 billion revenue run rate. According to Palisade Research expert Dave Kasten, the firm's dominance rests on four key advantages: superior vulnerability detection through reasoning-based discovery, progress toward automated AI researchers that accelerate development cycles, a first-mover advantage in "Defensive AI" for government and enterprise sectors, and a qualitative preference among professional coders for the Claude model's reliability and depth.