28 Feb 2026


Instagram expands teen safety tools

New safety feature rolls out amid rising scrutiny over social media and youth mental health

Instagram has introduced a new feature aimed at helping parents spot potential emotional distress in teenagers by alerting them when their child conducts repeated searches for terms linked to suicide or self-harm on the platform. The move reflects growing concern about the intersection of young people’s mental health and social media use.

Under the new system, parents who have enabled Instagram’s parental supervision tools will receive notifications if, over a short period, their teen enters multiple search queries containing keywords associated with self-injury or suicidal thoughts. The alerts are designed to highlight patterns of worrying behaviour rather than single, isolated searches; Instagram says it will only flag activity that meets a threshold indicating possible risk.

Once an alert is issued, parents will see contextual information and links to third-party mental health resources intended to help them talk to their child about what they’re experiencing. Notifications can be delivered via Instagram itself or through other methods such as email or text, depending on the parent’s preferences.

This addition builds on existing protections: Instagram already blocks content that appears to promote self-harm and directs users who make certain searches to support resources. The new alert function goes a step further by notifying caregivers outside the platform if concerning patterns emerge.

The update is launching initially in several countries, with wider availability planned over the course of the year. Meta says it is also exploring future expansions that could include alerts based on interactions with AI tools around similar topics.

Child safety advocates have offered mixed feedback. Some welcome the greater transparency and the potential for early intervention. Others are wary, arguing that shifting the burden of intervention onto parents may not be sufficient without broader changes to how social media platforms handle sensitive content.

The rollout occurs against a backdrop of increased scrutiny of social media’s mental health impact on young users. Meta continues to face pressure from parents, lawmakers and mental health groups to introduce stronger safeguards, even as it maintains that the relationship between social media and teen wellbeing is multifaceted and not fully understood.
