Whistleblowers warn of social media algorithm risks

Insiders say safety compromised in tech rivalry

Social media companies TikTok and Meta took risks with user safety while competing for attention in an algorithm-driven race for engagement, whistleblowers and insiders have told the BBC.

More than a dozen sources described how decisions inside the companies allowed more harmful material to appear in users’ feeds. The concerns include content linked to violence, sexual blackmail, extremism, and harassment.

The allegations are detailed in the BBC documentary Inside the Rage Machine, which examines how the rapid rise of TikTok transformed the social media landscape and pushed rival platforms to change how their algorithms recommend content.

Competition intensified the algorithm race

TikTok’s success with short-form videos powered by an extremely effective recommendation system forced competitors to react quickly.

Meta, the company behind Facebook and Instagram, launched Instagram Reels in 2020 in an attempt to compete with TikTok’s fast-growing popularity during the Covid pandemic.

Former Meta researcher Matt Motyl said the company moved rapidly to develop the product but without sufficient safeguards.

Internal research indicated that comments on Reels posts showed significantly higher levels of harmful behaviour compared with other parts of Instagram. The data suggested bullying and harassment were about 75% more common, hate speech 19% higher, and violent or inciting comments around 7% higher.

Motyl said the push to compete created a tension between protecting users and maximising engagement.

“Meta’s products are used by north of three billion people,” he said. “The more time they can keep you on the platform, the more ads they sell and the more money they make. But when they get this wrong, really bad things happen.”

Claims Meta allowed more borderline content

A former engineer at Meta said that internal priorities shifted as the company tried to catch up with TikTok.

According to the engineer, teams that had been working to limit “borderline” harmful content were told to loosen restrictions. Borderline material refers to posts that may be offensive or harmful but are not illegal, including misogynistic, racist, or conspiracy-based content.

The engineer said management signalled that the change was driven by business concerns as the company faced competitive pressure.

“You’re losing to TikTok and therefore your stock price must suffer,” the engineer said. “People started becoming reactive and asking where they could gain even small increases in engagement.”

Internal documents also showed the company knew that emotionally charged posts — particularly those that provoke anger or outrage — generate stronger engagement. According to one internal study, this created incentives that did not always align with Meta’s mission to bring people closer together.

TikTok moderation concerns raised

Separate allegations from a TikTok trust and safety employee raise questions about how the platform prioritises moderation decisions.

The whistleblower, whom the BBC calls Nick, shared access to internal dashboards used by staff to track complaints about harmful content.

Nick said his team often struggled to keep up with the volume of reports, leaving some serious cases unresolved. In his view, teenagers and children were particularly vulnerable.

He also claimed that cases involving politicians sometimes received higher priority than reports involving young users experiencing abuse or exploitation.

In one example shown to the BBC, a complaint from a political figure who had been mocked online was reviewed before cases involving a teenager reporting cyberbullying and another minor whose images were allegedly being shared without consent.

Nick said staff were instructed to follow the ranking system even when they believed other cases should be prioritised.

He argued that maintaining good relationships with politicians and governments was sometimes seen as strategically important to avoid regulatory pressure or bans.

“If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point you have to ask whether you should speak out,” he said.

Harmful content and radicalisation concerns

Experts and insiders also raised concerns about the broader impact of algorithm-driven feeds.

Machine-learning engineer Ruofan Ding, who worked on TikTok’s recommendation system between 2020 and 2024, said the systems are extremely complex and difficult to fully control.

“The algorithm itself is a kind of black box,” he said, explaining that engineers focus mainly on optimising engagement signals rather than the meaning of individual posts.

Content moderation teams are expected to remove harmful material so it cannot be recommended by the algorithm, he said, comparing the relationship to different teams responsible for parts of a car.

But Ding said he began noticing more borderline or problematic content appearing in recommendations as the algorithm was continuously updated to improve user engagement and expand market share.

Some teenagers told the BBC that tools meant to reduce unwanted content were ineffective, saying they continued to be shown violent or hateful posts.

One teenager, Calum, said he felt he had been “radicalised by algorithm” after being repeatedly exposed to extreme content from the age of 14.

The videos, he said, encouraged anger and pushed him toward racist and misogynistic views before he later recognised the influence they had on him.

Safety teams struggled for resources

Former employees also described internal tensions over resources dedicated to safety.

Motyl said Meta invested heavily in expanding Instagram Reels while requests from safety teams for additional staff were sometimes rejected.

Another former employee told the BBC that proposals for specialist roles focused on protecting children and safeguarding election integrity were denied while hundreds of staff were hired to grow Reels.

Brandon Silverman, whose analytics company CrowdTangle was acquired by Facebook, said Meta’s leadership was highly focused on competition.

“When there are competitive threats, there’s effectively no limit to the investment they are willing to make,” he said.

But he said debates about harmful content sometimes shifted from reflection to defensiveness, with the company arguing that it could not be held responsible for wider societal polarisation.

Companies reject allegations

Both companies strongly reject the claims made by whistleblowers.

Meta said suggestions that it deliberately amplifies harmful content for financial gain are incorrect. A spokesperson said the company has strict policies designed to protect users and has invested heavily in safety and security measures over the past decade.

TikTok said the whistleblower allegations were fabricated and misrepresented how its moderation systems operate.

The company said it uses technology to prevent harmful content from being viewed and maintains strict recommendation policies. It also said teen accounts have more than 50 preset safety features enabled by default.

TikTok added that specialised moderation workflows do not reduce the priority given to child safety cases, which are handled by dedicated teams.

Despite these assurances, critics say the whistleblowers’ accounts offer a rare insight into how decisions inside major technology companies shape the content billions of people see every day.

Source: BBC
