Google, Microsoft, and xAI have agreed to give the U.S. government early access to their frontier artificial intelligence models for national security testing before public release, the Commerce Department’s Center for AI Standards and Innovation (CAISI) announced Tuesday. The agreement follows a report that the Trump administration is weighing an executive order on AI oversight, an effort sparked in part by Anthropic’s Claude Mythos model, which demonstrated exceptional skill in discovering cybersecurity vulnerabilities.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said CAISI Director Chris Fall. The agency has already conducted more than 40 evaluations of pre‑release models, often reviewing versions with relaxed safety guardrails to assess worst‑case risks such as automated cyberattacks. The White House has held meetings with executives from Anthropic, Google, and OpenAI to discuss a potential working group that would review advanced AI systems prior to launch.
Anthropic has limited access to Claude Mythos rather than releasing it broadly; Mozilla used it to find and patch 271 vulnerabilities in the Firefox browser. On prediction platform Myriad, the market assigns only a 13% chance that Mythos will see a broad release by June 30. The administration’s interest in oversight marks a shift from its earlier deregulatory stance—President Trump revoked a 2023 Biden executive order that required AI safety testing disclosures—and comes amid a contract dispute with Anthropic over unrestricted model access. The Defense Department recently expanded its own AI partnerships, signing agreements with seven firms to deploy capabilities across classified networks.