Meta’s updated teen safety policies don’t go far enough, says whistleblower

11 Jan 2024


Meta whistleblower Arturo Béjar said there was still no way for a teen to flag an unwanted advance on Instagram or Facebook.

Earlier this week, Meta introduced a set of new policies aimed at safeguarding teen users on its platforms, Facebook and Instagram. The updates come under the shadow of a massive lawsuit over child safety brought against Meta and other tech giants, including TikTok owner ByteDance and Google’s YouTube. However, a Meta whistleblower has said the updates don’t address 99pc of the harmful content on the platforms.

In a nutshell, the complex case includes accusations that the tech companies have intentionally designed their platforms to be addictive – and that minors are especially at risk. A lack of parental controls, a lack of age verification and potentially harmful algorithms are among the other concerns raised in the lawsuit.

Whistleblower Arturo Béjar

Much of the focus fell on Meta’s role in all of this in November 2023, when a whistleblower spoke to a US Senate judiciary committee about his time at the company. Arturo Béjar, who worked for Meta as a consultant supporting Instagram’s wellbeing team around 2019, claimed that the company was fully aware of the harm its platforms cause young users and that its policies do little to prevent that harm.

He accused his former employer of continuing to “publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform”. He said that the harm included minors receiving unwanted sexual advances. He criticised Meta’s child safety policies at the time for being ineffective, branding them as “placebo” tools.

His own teenage daughter was allegedly the victim of unwanted sexual advances via Instagram, and when she reported this to the company, nothing was done. Béjar, who had also worked for Facebook prior to working at Instagram, claimed that many of the child safety tools he had worked on during his time there had since been removed.

Meta’s new safety measures for minors

The statement that Meta released on 9 January of this year to accompany its updated safety policies for minors said: “We want teens to have safe, age-appropriate experiences on our apps.”

Among the new measures it has introduced is a commitment to hide more content on its platforms that could harm children. It said it would consult with experts on how to proceed and what types of content to limit for young people.

Meta also said it would be automatically placing all teen users of its platforms into the most restrictive content control settings. On Instagram, teens will be prompted to update their privacy settings, and some search terms – such as those related to suicide, self-harm and eating disorders – will be restricted on the platform.

Meta was keen to stress that it cares about user safety and always has done. “We’re starting to roll these changes out to teens under 18 now and they’ll be fully in place on Instagram and Facebook in the coming months.

“We’ve developed more than 30 tools and resources to support teens and their parents, and we’ve spent over a decade developing policies and technology to address content that breaks our rules or could be seen as sensitive.

“We regularly consult with experts in adolescent development, psychology and mental health to help make our platforms safe and age-appropriate for young people,” the company stated.

A failure to address the issue?

Commenting on Meta’s commitments, Béjar said today (11 January): “These changes rely on the ‘grade your own homework’ definitions of harm, which does not address 99pc of the harmful content they recommend to teens.

“The harm that teens experience online will not be reduced until social media companies commit to publicly disclosing and setting goals to reduce the number of times teens experience harm on their products,” he said.

“As an engineering and product leader, I know this can be a straightforward fix. It should be as simple as: did you see self-harm content in the last seven days? When you see it, can you do something about it that helps you and helps the community?”

He also claimed that “there is still no way for a teen to easily flag an unwanted advance” on the platform. “It is as simple as a button.”


Blathnaid O’Dea was a Careers reporter at Silicon Republic until 2024.
