Meta is restricting teens from viewing content that deals with topics like suicide, self-harm, and eating disorders, the company announced today. The content, which Meta says may not be “age appropriate” for young people, will not be visible even if it’s shared by someone a teen follows.
If a teen searches for this type of content on Facebook and Instagram, they’ll instead be directed toward “expert resources for help” like the National Alliance on Mental Illness, according to Meta. Teens also won’t necessarily be told when content in these categories has been shared and hidden from their view. The change is rolling out to users under 18 over the coming months.
In addition to hiding content in sensitive categories, Meta is defaulting teen accounts to restrictive content filtering settings that shape what they see on Facebook and Instagram. The settings affect recommended posts in Search and Explore that could be “sensitive” or “low quality,” and Meta will automatically place all teen accounts on the most stringent tier, though users can change the settings afterward.
The sweeping updates come as Meta and other tech companies face heightened government scrutiny over how they handle child safety on their platforms. In the US, Meta CEO Mark Zuckerberg — along with a roster of other tech executives — will testify before the Senate on child safety on January 31st. The hearing follows a wave of legislation across the country that attempts to restrict kids from accessing adult content.
Even beyond porn, lawmakers have signaled they are willing to age-gate large swaths of the internet in the name of protecting kids from certain (legal) content, like material dealing with suicide or eating disorders. For years, reports have documented how teens’ feeds can be flooded with harmful content. But blocking everything besides what platforms deem trustworthy and acceptable could also cut young people off from educational or support resources.
Meanwhile, in the EU, the Digital Services Act holds Big Tech companies including Meta and TikTok accountable for content shared on their platforms, with rules around algorithmic transparency and ad targeting. And the UK’s Online Safety Act, which became law in October, now requires online platforms to comply with child safety rules or risk fines.
The Online Safety Act lays out a goal of making the UK “the safest place in the world to be online,” but the law has drawn vocal critics who say it could undermine user privacy. The encrypted messaging app Signal, for example, has said it would leave the UK rather than collect more data that could put its users’ privacy at risk.