10 Feb 2026


OpenAI Rolls Out Age Verification and Teen Safety Features in ChatGPT


OpenAI has announced a new age verification system and safety features in ChatGPT designed specifically to protect teenage users. The initiative comes amid growing concerns about the risks AI tools may pose to minors, particularly around mental health and exposure to inappropriate content.

The company is developing an age prediction tool that estimates a user’s age based on their behavior. If the system is uncertain, it will default to a stricter, teen-friendly version of ChatGPT. In some regions, users may also be asked to verify their age through official ID, depending on local regulations.
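The article describes this gating only at a high level, but the decision logic amounts to: trust the age prediction when it is confident, and otherwise fall back to the stricter teen experience or, where local regulations allow, ask for ID. The sketch below illustrates that flow; every name, the confidence threshold, and the structure are assumptions for illustration, not OpenAI's actual implementation.

```python
# Illustrative sketch only; hypothetical names, not OpenAI's actual system.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_age: int   # age inferred from usage signals (assumed field)
    confidence: float    # 0.0 - 1.0 (assumed field)

def select_experience(estimate: AgeEstimate,
                      id_verification_available: bool,
                      confidence_threshold: float = 0.9) -> str:
    """Choose which ChatGPT experience to serve, defaulting to the
    stricter teen version whenever the prediction is uncertain."""
    if estimate.confidence < confidence_threshold:
        # Uncertain prediction: in some regions the user may be asked to
        # verify with official ID; otherwise fall back to teen mode.
        if id_verification_available:
            return "request_id_verification"
        return "teen_experience"
    if estimate.predicted_age < 18:
        return "teen_experience"
    return "standard_experience"
```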

“We’re prioritizing safety for teens, even if it means reducing some freedoms or collecting minimal personal information,” said OpenAI CEO Sam Altman. “When privacy, freedom, and safety conflict, we believe protecting teens must come first.”

Teen-Specific Experience

For users identified as between 13 and 17, ChatGPT will offer a modified experience with tighter content controls. This includes blocking sexually suggestive or flirtatious interactions, restricting fictional content involving self-harm or suicide, and limiting exposure to other potentially harmful topics—even in creative or roleplay settings.

These changes aim to reduce emotional risks for younger users while maintaining helpful, educational, and age-appropriate use of the AI tool.

Emergency and Parental Safeguards

In situations where a teen expresses suicidal thoughts or extreme emotional distress, OpenAI may attempt to contact a parent or guardian. If there is an immediate safety risk and no guardian is reachable, authorities may be notified. The company acknowledges this step reduces privacy but says it is necessary to protect lives.
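Again as a rough illustration of the escalation order described above (guardian first, authorities only when the risk is immediate and no guardian can be reached), the following sketch uses hypothetical function names and inputs; it is not OpenAI's actual process.

```python
# Illustrative sketch of the escalation flow described in the article.
# Function names, inputs, and return values are assumptions.

def escalate_distress_case(imminent_risk: bool,
                           guardian_contact: str | None,
                           notify_guardian,
                           notify_authorities) -> str:
    """Try to reach a parent or guardian first; involve authorities
    only when the risk is immediate and no guardian is reachable."""
    if guardian_contact is not None:
        notify_guardian(guardian_contact)
        return "guardian_notified"
    if imminent_risk:
        # No guardian reachable and the safety risk is immediate.
        notify_authorities()
        return "authorities_notified"
    return "offer_crisis_resources"
```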

New parental control tools are also on the way. These will allow parents to link their account to their teen's, set time limits, disable memory or chat history, and receive alerts if the AI detects signs of serious emotional distress.
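The controls listed above map naturally onto a small settings object. The sketch below is purely hypothetical; the field names are assumptions based on the features the article describes, not a published OpenAI API.

```python
# Hypothetical parental-control settings, sketched from the features
# listed in the article; all field names are assumptions.
from dataclasses import dataclass

@dataclass
class ParentalControls:
    linked_teen_account_id: str            # parent account linked to the teen's
    daily_time_limit_minutes: int | None = None  # optional usage limit
    memory_enabled: bool = True            # parents can disable memory
    chat_history_enabled: bool = True      # parents can disable chat history
    distress_alerts_enabled: bool = True   # alert parents on detected distress
```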

Broader Impact

These changes follow increased public scrutiny after reports that ChatGPT may have contributed to emotional distress in young users, as well as a lawsuit involving the suicide of a 16-year-old. OpenAI said it has worked with child safety experts and mental health professionals to design the new features.

Altman emphasized that while the system isn’t perfect, the company is committed to ongoing improvement and transparency.
