10 Feb 2026


OpenAI Warns of Spike in Suicidal Conversations on ChatGPT

OpenAI has revealed that more than one million ChatGPT users each week send messages indicating potential suicidal thoughts or planning, marking the company’s most direct acknowledgment yet of the mental health risks tied to AI chatbot use.

The disclosure came in a blog post released Monday, outlining how the company’s systems detect and respond to sensitive mental health conversations.

OpenAI said its analysis also found that roughly 0.07% of active users—about 560,000 out of an estimated 800 million weekly users—display possible signs of mental health emergencies, including psychosis or manic episodes. The company emphasized that these figures are preliminary and difficult to measure with precision.

The update arrives as OpenAI faces mounting public and regulatory scrutiny. The family of a teenager who died by suicide has filed a high-profile lawsuit alleging that extensive interactions with ChatGPT contributed to his death.

Meanwhile, the Federal Trade Commission has launched a sweeping investigation into AI chatbot developers, examining how they assess potential harms to children and adolescents.

According to OpenAI, the latest version of its flagship model, GPT-5, has shown measurable improvements in handling self-harm and suicide-related conversations.

The company said internal evaluations rated GPT-5 at 91% compliance with its “desired behaviors,” up from 77% in the previous version. OpenAI added that GPT-5 also provides more direct access to crisis helplines and introduces reminders encouraging users to take breaks during extended sessions.

To enhance the system’s safety, OpenAI enlisted 170 medical professionals—psychiatrists, psychologists, and other clinicians—from its Global Physician Network to review and rate the chatbot’s responses in over 1,800 serious mental health scenarios.

The clinicians helped refine both model training and scripted responses, the company said.

OpenAI’s post framed the issue as an inevitable byproduct of its global user base, rather than a direct consequence of its technology. “Emotional distress and mental health symptoms are universally present,” the company wrote. “As our user base grows, some conversations will naturally reflect these realities.”

Still, mental health experts have long warned that AI chatbots can reinforce users’ harmful thoughts through “sycophantic” responses—agreeing with users’ emotions or delusions rather than challenging them.

Advocates say the findings underscore the need for stronger safety protocols and clearer accountability mechanisms.

OpenAI CEO Sam Altman recently suggested that the company is now confident enough in its safeguards to relax some of ChatGPT’s stricter content limits. In a post on X earlier this month, he said, “We made ChatGPT pretty restrictive to ensure safety around mental health. Now that we’ve mitigated the biggest risks, we can begin easing those restrictions.”
