The European Commission confirmed on Monday that it is pursuing separate dialogues with OpenAI and Anthropic over frontier AI model safety, but the nature of those talks could not be more different. Commission spokesman Thomas Regnier told reporters that OpenAI proactively offered access to its newest model, whereas after “four or five” meetings with Anthropic, access to that company’s latest system has not even come up.
“With one (OpenAI), you have a company proactively offering to give access to the company. With the other one (Anthropic), we have good exchanges though we’re not at a stage where we can speculate on potential access or not,” Regnier said. The EU’s AI Act is now rolling out in stages, and although companies are not required to submit pre‑market model reviews, the Commission is clearly eager to get under the hood of the most powerful systems.
The regulatory maneuvering comes as on‑chain markets now price Anthropic at a staggering $1.4 trillion implied pre-IPO valuation, according to pre‑IPO instruments listed on Jupiter. The figure, up 40% in just 24 days and 1,067% since October 2025, marks the highest level yet for the Claude maker before any traditional stock market listing. The instruments are backed 1:1 by SPV exposure, allowing crypto‑native traders to perform price discovery long before Wall Street’s IPO roadshow.
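Those percentage moves also pin down roughly where the instruments were trading before the run‑up. As a back‑of‑envelope sketch (the Python below is purely illustrative and assumes the quoted figures are simple point‑to‑point changes measured against the current $1.4 trillion mark):

```python
# Back-of-envelope: derive the implied earlier valuations from the quoted
# point-to-point percentage changes ending at the current $1.4T figure.
# Assumption: simple (non-compounded) percentage changes, as quoted.

CURRENT_VALUATION = 1.4e12  # $1.4 trillion implied pre-IPO valuation

def implied_start(current: float, pct_change: float) -> float:
    """Valuation before a move of pct_change percent that ends at `current`."""
    return current / (1 + pct_change / 100)

print(f"24 days ago (+40%):     ${implied_start(CURRENT_VALUATION, 40):,.0f}")
print(f"October 2025 (+1,067%): ${implied_start(CURRENT_VALUATION, 1067):,.0f}")
# Roughly $1.0 trillion 24 days ago and about $120 billion in October 2025.
```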
Behind the valuation frenzy are explosive revenue numbers: Anthropic’s annualized revenue has climbed from $100 million in 2023 to $45 billion today, including a 1,400% increase over the past twelve months alone. The company is also pushing Claude into high‑stakes commercial applications. FIS revealed that Claude powers its Financial Crimes AI Agent, designed to slash anti‑money‑laundering investigation times from hours to minutes while keeping all client data inside FIS‑controlled systems. BMO and Amalgamated Bank are among the first to deploy the agent, with broader availability planned for the second half of 2026.
The EU access gap widened further when OpenAI announced on Monday that it will give GPT‑5.5‑Cyber, a cybersecurity‑tuned version of its latest model, to European governments, agencies, and institutions, starting with vetted cybersecurity teams. That contrasts with Anthropic’s decision to hold back its Mythos model after internal tests showed it could find exploitable software bugs at scale. George Osborne, OpenAI’s Head of Country, stressed that “an AI lab such as our own cannot unilaterally judge whether a system is secure” and argued that the technology should reach Europe’s cyber defenders broadly, not just a select few. The UK AI Safety Institute had separately found that GPT‑5.5 could reverse‑engineer a virtual machine and outpace a human expert on a complex challenge.
Meanwhile, Washington is eyeing mandatory pre‑release checks on the most powerful AI models, a move that gained steam after the Mythos episode. For now, Brussels is left with one door wide open and another only slightly ajar.