Last year, Instagram unveiled teen accounts a day before a key House committee was scheduled to weigh amendments to the Kids Online Safety Act, which would have created a new obligation for companies to mitigate potential harms to children. The measure passed in the Senate but stalled in the House.
The new features are the latest in a steady drip of teen safety tweaks the app has rolled out as parents, researchers, and lawmakers urge its parent company, Meta, to stop serving dangerous or inappropriate content to young people.
The new system will filter even more content depicting violence, substance use, and dangerous stunts from teenagers’ feeds, the company said.
“Our responsibility is to maximise positive experiences and minimise negative experiences,” Instagram chief executive Adam Mosseri said on the Today show, discussing the tension between keeping teenagers engaged on the app and shielding them from harmful content and experiences.
Advocates for children’s online safety, however, urged parents to remain sceptical.
“We don’t know if [the updates] will actually work and create an environment that is safe for kids,” said Sarah Gardner, chief executive of tech advocacy organisation Heat Initiative.
Based on a user’s self-reported age as well as age-detection technology that examines a user’s in-app behaviour, Instagram says it automatically places users aged 13 to 17 into teen accounts with the accompanying guardrails.
Parents can use Meta’s parental controls to link their accounts with their teen’s and opt for settings that are more or less restrictive. With parental permission, 16- and 17-year-olds can opt out of some teen account restrictions.
Instagram, originally an app for sharing photos with friends, has increasingly shown content from non-friends as it competes with TikTok, YouTube and Twitch for teenagers’ time.
Along the way, it has come under fire for showing young people content promoting suicide and self-harm.
Beginning with a “sensitive content” filter in 2021, Instagram has introduced a series of features it says are designed to limit potentially harmful posts and protect teens from bullying and predation.
Last year, it launched “teen accounts” that come with automatic restrictions on recommended content as well as friend requests and direct messages.
A report earlier this year from Gen Z-led tech advocacy organisation Design It For Us showed that even when using teen accounts, users were shown posts depicting sex acts and promoting disordered eating.
When my colleague Geoffrey Fowler tested it in May, he found the app repeatedly recommended posts about binge drinking, drug paraphernalia, and nicotine products to a teen account. Meta at the time said that the posts in question were outliers and that most were “unobjectionable”.
Other close looks at the efficacy of teen account protections had similar findings.
A September report from Meta whistleblower Arturo Bejar alongside a group of academics and tech advocacy organisations found that teen accounts were still able to send “grossly offensive and misogynistic comments” and view posts describing “demeaning sexual acts”.
Meta has vehemently denied the report’s findings, with spokesman Andy Stone calling it a “highly subjective, misleading assessment that repeatedly misrepresents our efforts and misstates how our safety tools work”.
“There is no reason to trust that Instagram’s promised changes will actually make the product safe for teens: Nearly two-thirds of Instagram’s promoted safety tools for teens were ineffective or non-existent,” said Josh Golin, executive director of children’s advocacy organisation Fairplay, citing the report’s findings.
As Instagram fields a fresh wave of pushback from critics, it is also experimenting with AI-powered chatbots that users, including teenagers, can talk with.
A report in August from family advocacy group Common Sense Media found that the chatbot was coaching teen accounts on suicide and self-harm.
When Fowler tested Instagram’s chatbot, he found it willing to offer advice on disordered eating. Meta responded that the bot’s behaviour violated its policies and that it planned to investigate.
Design It For Us co-chair Zamaan Qureshi said that rather than taking responsibility for what teens encounter on the platform, Meta is shifting responsibility to parents to both flag inappropriate content and double-check what is slipping through the filters.
Furthermore, Qureshi said, it is hard to take Meta’s series of safety updates at face value because the company doesn’t share data showing whether past updates have been effective at making the app safer for teens.
“They’re a very sophisticated company, so they’re fully capable of doing this kind of research,” he said.