In a significant disclosure, AI company Anthropic has revealed a coordinated, industrial-scale campaign by three prominent Chinese AI firms to extract capabilities from its flagship Claude model. The companies—DeepSeek, Moonshot AI, and MiniMax—allegedly created over 24,000 fake accounts to generate more than 16 million interactions with Claude, using a technique known as distillation, in which one model's outputs are harvested at scale to train another.
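For readers unfamiliar with the term, a minimal sketch of what API-based distillation looks like in practice follows. All names here (`query_teacher`, `build_distillation_set`) are illustrative stand-ins, not any real API: the core pattern is simply querying a "teacher" model en masse and saving prompt/response pairs as supervised training data for a "student" model.

```python
import json

def query_teacher(prompt: str) -> str:
    # Stand-in for a call to a commercial model's API; in a real
    # campaign this would be an HTTP request to the teacher model.
    return f"teacher response to: {prompt}"

def build_distillation_set(prompts):
    # Each prompt/completion pair becomes one supervised fine-tuning
    # example for the student model.
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = build_distillation_set([
    "Write a sorting function",
    "Plan a multi-step data-analysis agent",
])
print(json.dumps(dataset[0]))
```

Because commercial APIs return only text (not internal model states), this harvested-pairs form of distillation is the variant the article describes, which is why it scales with the number of accounts and requests an attacker can sustain.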
The activity, which Anthropic's security team began detecting in early 2024, specifically targeted Claude's most advanced capabilities, including agentic reasoning, tool use, and sophisticated coding functions. The company documented distinct patterns: DeepSeek focused on foundational logic and alignment, Moonshot AI targeted agentic reasoning and data analysis, while MiniMax concentrated heavily on agentic coding and orchestration, redirecting nearly half of its traffic to siphon new capabilities as soon as Anthropic launched its latest Claude model.
Anthropic framed this not as a traditional hack but as a systematic violation of its terms of service and regional access restrictions, enabled by proxy services that resell access to frontier models and mask request origins. The attackers used "hydra cluster" architectures: large networks of fraudulent accounts that spread requests across Anthropic's API and third-party cloud platforms, making detection and takedown difficult.
The revelations arrive at a critical juncture in U.S.-China technology policy, coinciding with debates over relaxing export controls on advanced AI chips to China. Anthropic explicitly connected the attacks to chip access, noting the scale of extraction "requires access to advanced chips" and arguing that such distillation campaigns can undermine the competitive advantage U.S. export controls aim to protect.
Beyond commercial competition, Anthropic raised significant national security concerns. The company warned that models built through illicit distillation likely lack the rigorous safeguards U.S. labs implement against malicious uses like bioweapon development or cyber attacks. This risk is amplified given Anthropic's role in national security; the U.S. Department of Defense awarded it a prototype agreement with a $200 million ceiling to build frontier AI capabilities, with Claude being integrated into defense workflows via partners like Palantir.
Anthropic is calling for a coordinated response across the AI industry, cloud providers, and policymakers, advocating for enhanced technical defenses, industry standards, and clearer legal frameworks. The company stated, "The window to act is narrow," urging faster action as distillation campaigns grow in "intensity and sophistication."