Meta has failed to protect teens online, whistleblower claims

8 Nov 2023


Arturo Béjar claims Meta is aware of the harm its platforms can cause to young people and that its policies fail to address these issues.

Another Meta whistleblower has spoken out against the company’s practices, with claims that the tech giant is aware of the harm teenagers face on its platforms but has failed to act.

This whistleblower – Arturo Béjar – claims that Meta has opted to give users “placebo” tools that fail to address issues such as teenagers seeing harmful content, suffering harm to their mental health and receiving “unwanted sexual advances” on Instagram.

“Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform,” Béjar said.

“It is unacceptable that a 13-year-old girl gets propositioned on social media. Unfortunately, it happens all too frequently today.”

Béjar is a former director of engineering for Meta – then called Facebook – who left in 2015. Béjar said he worked to “reduce online threats” while at the company and felt the “work was going in the right direction” when he left.

Speaking to a US Senate judiciary committee yesterday (7 November), Béjar said he returned to the company in 2019 as a consultant to support Instagram’s wellbeing team. He added that one reason for this decision was that his own teenage daughter had suffered abuse on Instagram, including unwanted sexual advances and harassment.

“She reported these incidents to the company and it did nothing,” Béjar said.

Failure to address the issues

Béjar claims that Meta was focused on enforcing its own “narrowly-defined policies” while he was a consultant, regardless of whether these measures protected teens.

“I discovered that most of the tools for kids that I had put in place during my earlier time at Facebook had been removed,” Béjar said. “I observed new features being developed in response to public outcry, which were in reality kind of a placebo, a safety feature in name only.”

Béjar said that a survey conducted by Instagram in 2021 found that roughly 12.5pc of children aged 13 to 15 experienced “unwanted sexual advances in the last seven days”.

“This is unacceptable and my work shows it doesn’t need to be this way,” Béjar said. “Meta must be held accountable for their recommendations and the unwanted sexual advances that Instagram enables.”

Béjar claims he sent an email to Meta CEO Mark Zuckerberg and other executives in 2021 about the risks that young people were facing. He added that Meta knows the harm that children experience on its platforms and that its actions “fail to address it”.

“Social media companies must be required to become more transparent so that parents and the public can hold them accountable,” Béjar said. “It’s time the public and parents understand the true level of harm enabled by these products, and it’s time for Congress to act.”

In a statement sent to the Associated Press and other news outlets, Meta said that “countless people” inside and outside of the company are working every day on ways to keep young people safe online.

“Working with parents and experts, we have also introduced more than 30 tools to support teens and their families in having safe, positive experiences online,” Meta said. “All of this work continues.”

On the same day that Béjar testified before the US committee, Meta announced that it had joined Lantern, a tech coalition programme that aims to let companies share signals about accounts and behaviours that violate their child safety policies.

Previous concerns

This is not the first time that concerns have been raised over the impact that Meta’s products can have on young people. In 2021, whistleblower Frances Haugen shared internal research from the company, which became known as the Facebook Files.

One article about these files claimed that Meta had internal research that showed Instagram to be damaging to the mental health and wellbeing of teenage girls.

In June, a Stanford Internet Observatory report listed Instagram as the “primary platform” for the spread of child sexual abuse material (CSAM) online. The report claimed Instagram’s recommendation algorithms “effectively advertise” CSAM and that the site connects large paedophile networks.

At the time, a Meta spokesperson told SiliconRepublic.com that the company set up an internal task force to investigate these claims and “immediately address them”.

Last month, a coalition of 33 US attorneys general filed a federal lawsuit against Meta, accusing the tech giant of harmful actions against children and teenagers. California attorney general Rob Bonta said an investigation found that Meta has been getting young people addicted to its platforms for “corporate profits”.


Leigh Mc Gowran is a journalist with Silicon Republic
