Instagram to warn parents if teens search self-harm content

Instagram will soon notify parents if their teenage children repeatedly search for suicide or self-harm related terms on the platform. The alerts will be part of Meta’s child supervision tools, marking the first time the company has proactively informed parents about such searches rather than simply blocking them.

Rollout across countries

The new notifications will initially be available in the UK, US, Australia and Canada for families using Instagram’s Teen Accounts experience. Meta plans to extend the alerts globally in the coming months.

Criticism from suicide prevention experts

The Molly Rose Foundation, a suicide prevention charity, criticised the move, warning it “could do more harm than good”. The foundation was established by the family of Molly Russell, who died by suicide at 14 after viewing harmful content on Instagram and other platforms.

Chief executive Andy Burrows said, “Every parent would want to know if their child is struggling, but these notifications could leave parents panicked and ill-prepared for sensitive conversations.”

Support for parents

Meta said the alerts will be accompanied by expert guidance to help parents navigate difficult discussions with their children. Sameer Hinduja, co-director of the Cyberbullying Research Center, said that while the alerts may be alarming, the quality of accompanying resources is critical.

“You can’t drop a notification on a parent and leave them on their own,” Hinduja told the BBC.

Continued risks on the platform

Burrows noted research indicating that Instagram still recommends harmful content on depression, suicide and self-harm to vulnerable young users. He argued that the company should focus on addressing these risks rather than shifting responsibility to parents. Meta, however, disputes these claims, saying they misrepresent its efforts to protect teens.

How alerts will work

Instagram’s Teen Account alerts are designed to highlight sudden changes in a teen’s search behaviour. Alerts will be sent via email, text, WhatsApp, or directly through Instagram, depending on available contact information. Meta said the system may occasionally notify parents even when no real risk exists, prioritising caution.

Looking ahead, the company plans to apply similar alerts for teen interactions with AI chatbots on Instagram, reflecting the increasing use of AI for support among young users.

Growing regulatory scrutiny

Social media platforms are under mounting pressure globally to protect children. Australia has banned social media for under-16s, while Spain, France, and the UK are considering similar measures. Regulators and lawmakers are closely reviewing tech companies’ business practices regarding young users.

Instagram’s update comes as Meta faces legal and public scrutiny, with CEO Mark Zuckerberg and Instagram chief Adam Mosseri recently defending the company in US court over allegations of targeting younger users.

Source: BBC
