Anthropic's AI assistant Claude suffered a widespread service disruption on March 2, 2026, locking thousands of users out of its primary web interface and disrupting workflows for those who rely on the tool. The incident, first identified at 11:49 UTC, produced elevated error rates across claude.ai, the console, and the Claude Code coding assistant in multiple regions, including India, Europe, and parts of Africa.
Users reported HTTP 500 errors, failed logins, frozen prompts, and outright site downtime. While the core Claude API remained operational, giving developers a partial workaround, the primary web and app interfaces were severely impacted. The outage triggered a surge of reactions on social media, with users underscoring the tool's role in daily workflows. Notably, it coincided with scattered complaints about other AI platforms such as ChatGPT and Gemini, though those services showed no systemic failures.
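For readers wondering what "developer access via the API" looks like in practice: the sketch below assembles a direct call to Anthropic's public Messages API endpoint, the path that reportedly stayed up while the web interface was down. The endpoint, headers, and version string follow Anthropic's published API documentation; the model alias and key are illustrative placeholders, and the request is built but not sent so the snippet stays self-contained.

```python
import json

# Minimal sketch of a direct Messages API call, the developer-facing path
# that remained operational during the outage. Endpoint and headers follow
# Anthropic's public API docs; the model name and key are placeholders.

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str,
                  model: str = "claude-3-5-sonnet-latest") -> dict:
    """Assemble the URL, headers, and JSON body for a Messages API call."""
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,               # issued via the Anthropic console
            "anthropic-version": "2023-06-01",  # required API version header
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("sk-ant-...", "Summarize today's standup notes.")
# Send with any HTTP client, e.g.:
# requests.post(req["url"], headers=req["headers"], data=req["body"])
```

During interface outages like this one, scripts built on the raw API kept working even as browser sessions returned 500s, which is why some developers were less affected than web users.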
The timing of the outage has drawn additional commentary, coming just days after renewed scrutiny of Anthropic's government contracts. Reports indicate that U.S. federal agencies were ordered to stop using Anthropic's AI tools, affecting over $200 million in contracts, due to concerns over restrictions on the technology's use, particularly regarding military applications. CEO Dario Amodei has previously acknowledged pressure related to the company's stance on military use of its AI.
Anthropic, backed by major investors like Amazon and Alphabet, confirmed it was actively investigating the issue but did not immediately provide an estimated time for full resolution. Historically, similar Claude outages have been resolved within one to two hours. The company advised users to monitor its official status page for updates.
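For those monitoring the recovery programmatically rather than by refreshing the status page: assuming Anthropic's status page follows the common Statuspage convention of exposing machine-readable JSON at `/api/v2/status.json` (an assumption, not confirmed by this report), a payload from such an endpoint can be summarized as below. Fetching is left to any HTTP client; only the parsing is shown.

```python
import json

# Hedged sketch: many hosted status pages expose a JSON summary at
# /api/v2/status.json with an "indicator" field ("none", "minor",
# "major", "critical") and a human-readable description. This helper
# parses such a payload into a one-line summary.

def summarize_status(payload: str) -> str:
    """Return 'indicator: description' from a Statuspage-style status.json body."""
    data = json.loads(payload)
    status = data["status"]
    return f"{status['indicator']}: {status['description']}"

# Example payload shaped like a Statuspage response (contents illustrative):
sample = json.dumps({
    "status": {"indicator": "major",
               "description": "Elevated error rates on claude.ai"}
})
print(summarize_status(sample))  # major: Elevated error rates on claude.ai
```

Polling an endpoint like this on a timer is a lighter-weight way to catch the "resolved" transition than watching the page by hand.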