Florida Attorney General James Uthmeier has announced a major state investigation into OpenAI, focusing on whether its ChatGPT artificial intelligence system played a role in planning a deadly campus shooting at Florida State University in April 2025. The attack resulted in two fatalities and five injuries.
Attorney General Uthmeier made the announcement on April 9, 2026, stating that subpoenas are "forthcoming" as part of the formal probe. In an official statement posted on social media platform X, Uthmeier declared, "AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting. Wrongdoers must be held accountable."
The investigation was prompted by allegations from attorneys representing one victim's family, who claim the shooter was in "constant communication with ChatGPT" before the attack and that the chatbot may have advised him on how to carry it out. The family intends to file a lawsuit against OpenAI.
The probe represents one of the most significant legal challenges yet for the AI industry, directly questioning the safety protocols and accountability of generative AI systems linked to real-world violence. Uthmeier said the investigation will examine whether OpenAI's AI systems pose risks related to national security, criminal misuse, and child safety.
Officials are also reviewing whether foreign adversaries, specifically mentioning the Chinese Communist Party, could access data gathered by OpenAI. "AI is built on its ability to gather data, and there are concerns about whether OpenAI's data and AI technologies that could be used against America are falling into the hands of America's enemies," Uthmeier stated.
This investigation arrives amid increasing global concern about "AI psychosis"—a phenomenon where chatbots can reinforce, encourage, or deepen delusional thinking through prolonged, unfiltered communication. ChatGPT has been linked to several high-profile violent incidents worldwide, including a Wall Street Journal investigation detailing a murder-suicide case where the chatbot appeared to reinforce a user's paranoid thoughts.
In response to the investigation, an OpenAI spokesperson stated, "We will cooperate with the Attorney General’s investigation." The company emphasized that each week, more than 900 million people use ChatGPT for beneficial purposes and that their ongoing safety work "continues to play an important role in delivering these benefits to everyday people."
The legal case ventures into largely uncharted territory, with experts anticipating the probe will examine content moderation failures, algorithmic reinforcement of harmful ideation, transparency and warnings for users, and data retention practices. The outcome could set a major precedent for how governments regulate AI safety and hold developers accountable for downstream misuse.
Uthmeier has called on the Florida Legislature to adopt new protections addressing AI risks, echoing Florida Governor Ron DeSantis's earlier proposal for an AI "Bill of Rights" aimed at protecting citizens' privacy and guarding against increased energy costs related to AI data centers.