OpenAI revealed on Monday that roughly 1.2 million users, or 0.15% of its 800 million weekly ChatGPT users, discuss suicide with the AI chatbot each week, including nearly 400,000 who show explicit suicidal intent. Separately, about 560,000 users per week exhibit signs of psychosis or mania, and another 1.2 million display heightened emotional attachment to the chatbot.
The company reported that GPT-5 now achieves 91% compliance with desired behaviors in suicide-related scenarios, up from 77% in earlier models, but admitted that its safeguards become less reliable in longer conversations. Former OpenAI safety researcher Steven Adler criticized the lack of evidence behind the claimed improvements, noting that in one case study, OpenAI's own classifiers would have flagged more than 80% of the chatbot's responses as problematic. The company faces a wrongful death lawsuit from the parents of a 16-year-old, along with broader legal scrutiny over its handling of mental health crises.
In a live AMA on Tuesday, CEO Sam Altman apologized for mishandling the GPT-4o to GPT-5 upgrade, acknowledging user backlash over restrictive safety filters and poor communication. He pledged to introduce an "adult mode" for verified adults that relaxes content limits while keeping protections in place for minors and users in distress. Altman emphasized greater user control and customization, with plans to allow erotic content generation starting in December, though he conceded that using erotica as an example had been a mistake.
Altman also unveiled sweeping structural changes, including the new OpenAI Foundation, which controls the for-profit group and will channel roughly $130 billion in equity toward scientific and humanitarian projects. The partnership with Microsoft was extended through 2032, with Microsoft's stake valued at about $135 billion, and a $1.4 trillion computing build-out, dubbed "Stargate," aims to eventually add a gigawatt of AI compute per week. He projected that AI models could make small scientific discoveries by 2026 and larger ones by 2028, evolving ChatGPT into a broader platform for innovation.