The Trump administration has formally defended the Pentagon's decision to blacklist AI company Anthropic, setting the stage for a high-stakes legal battle that could cost the company billions of dollars. In court documents filed on March 17, 2026, the Justice Department argued that Defense Secretary Pete Hegseth's March 3 designation of Anthropic as a "national security supply chain risk" was legally sound and driven by security concerns, not retaliation.
The dispute centers on Anthropic's refusal to remove safety guardrails from its Claude AI assistant that prevent its use in autonomous weapons systems and domestic surveillance. The Pentagon called these limits "unacceptable," arguing that allowing Anthropic continued access to military systems would introduce an "unacceptable risk" into defense supply chains. Officials also expressed concern that Anthropic could "disable its technology or preemptively alter the behavior of its model" during active military operations if it believed its policies had been violated.
Anthropic has filed two federal lawsuits challenging the designation. The lead suit, filed in California federal court on March 9, calls the blacklisting "unprecedented and unlawful" and argues that it violates the company's First Amendment free speech and due process rights. A second suit was filed in a Washington, D.C. appeals court.
The Justice Department countered that the issue is about contract conduct, not protected speech. The filing states, "For national security reasons, the terms of service for plaintiff Anthropic PBC’s artificial intelligence (AI) technology have become unacceptable to the Executive Branch." It further contends that Anthropic's conduct during negotiations led the Department of Defense to question whether the company was a suitable partner.
Microsoft, which both uses Anthropic's Claude model and supplies the U.S. military, filed an amicus brief supporting Anthropic, warning that the designation could damage the broader AI ecosystem. "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft wrote.
Anthropic executives have warned the blacklisting could cause billions in losses in 2026. The company maintains that AI is not yet safe enough for autonomous weapons and that it opposes domestic surveillance on principle. Despite the conflict, Anthropic co-founder Dario Amodei noted the firm and the government "broadly share the same objectives" and expressed greater concern about AI-made bioweapons and Chinese interference than about AI use in warfare.