On an internet where you’re more likely to interact with bots than with actual humans, and where children grow more technologically savvy every day, navigating phones better than they can bikes, social media platforms are looking for ways to balance users’ privacy with the safety of their underage users. Unfortunately, these two goals often conflict, and the lack of government oversight gives companies little incentive to pursue anything beyond the status quo.
“Privacy can sometimes be two sides of a coin,” said Johnny Ayers, the CEO and founder of the AI-powered identification software company Socure. “There is a very dangerous naivety that [comes with] identity fraud, liveness, deep fake detection.”
“You can’t collect biometrics on a kid,” he told Fortune. “And so how do you verify someone is 13 without verifying, without collecting a thing, that they’re 13?”
The FTC has called this policy change a move in the right direction, but psychologists and privacy experts alike warn it allows companies to overreach in data collection, undercutting any pseudo-privacy measures, and that for many children the damage has already been done.
“These platforms were developed for adults, but kids are on them. It was never purposeful, like, what’s the product for kids? It was an afterthought, which then means we’re trying to plug holes,” Debra Boeldt, a generative AI psychologist at the family online safety company Aura, told Fortune. “A lot of these companies right now are trying to help, but don’t have the resources to put towards it, or the evidence-based, trained individuals to think about it and plan for it.”
She oversees the clinical research team at Aura, an online safety solution that helps individuals and families protect their identities, and their children’s, in an increasingly digital landscape. The company uses AI to monitor families’ online activity and can even recognize keyboard inputs to flag whether a child is using harmful language or a harmful platform.
Kids are playing digital whack-a-mole
Efforts by social media companies to keep children off their platforms will prove difficult, simply because kids know how to get around the restrictions.
“This is just their normal space, where they connect,” Boeldt said, adding that any attempt is “going to be kind of like whack-a-mole,” with underage users simply moving on to the next platform.
“These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen,” the company said in a release.
However, kids already get around filters on social media platforms like TikTok and Instagram, using words like “unalive” or referring to “PDF files” to mean other, more sinister things.
This poses a problem, Boeldt said: any attempt to stop children from using certain terms simply breeds a new vocabulary, which in turn forces a new round of monitoring, in a never-ending cycle.
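To make that cycle concrete, here is a minimal, hypothetical sketch of how a naive keyword blocklist, the kind of filter platforms might start with, gets lapped by coded language. The terms, function names, and logic are illustrative assumptions, not any platform’s actual moderation system.

```python
# A hypothetical sketch of the "whack-a-mole" cycle: a naive keyword
# blocklist, and how coded substitutes slip past it until moderators add
# them -- at which point users coin the next term. Purely illustrative;
# not any platform's real moderation logic.

BLOCKLIST = {"kill", "suicide"}  # round 1: the originally flagged terms

def is_flagged(post: str, blocklist: set[str]) -> bool:
    """Return True if any blocklisted word appears in the post."""
    words = post.lower().split()
    return any(word.strip(".,!?") in blocklist for word in words)

posts = [
    "I want to kill time this weekend",     # flagged (a crude false positive)
    "he got unalived in the last episode",  # coded substitute: not flagged
]

for post in posts:
    print(is_flagged(post, BLOCKLIST), "->", post)

# Round 2: moderators learn the euphemism and expand the list...
BLOCKLIST.add("unalived")
print(is_flagged(posts[1], BLOCKLIST), "->", posts[1])  # now flagged

# ...and users respond with the next coinage ("un-aliv3d", emoji, and so
# on), restarting the cycle. The filter can only chase vocabulary it has
# already seen.
```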
“When I saw this stuff on Instagram and self-harm, my brain immediately goes, ‘How good is their model? How well are they going to be detecting this?’” Ayers added.
Boeldt believes government regulation is the only way to truly force companies to ensure the safety of their users online. “These companies aren’t held to a certain standard” that would stop children from accessing their platforms, she said, and the status quo is something they “benefit from, with kids on their platform. More people, more ads.”
“At the end of the day, that actually takes a lot of money and resources to do this,” she added.