The financial incentives are staggering. Fairplay found that top AI slop channels targeting children have earned over $4.25 million in annual revenue, with some creators openly advertising profits from “plotless, mesmerizing AI content.” The letter argued that no amount of policy will be enough until the platform removes the financial incentive for creators of these videos.
The coalition draws on child development research to argue this isn’t a niche concern. Even adults can have trouble correctly identifying AI-generated content, getting it right only about 50% of the time. More troubling, repeated exposure makes people more likely to perceive AI imagery as real, even after being told it’s fake. For young children whose brains are still building foundational schemas of reality, the damage compounds over time.
Fairplay’s asks are structural, not cosmetic. The coalition is calling on YouTube to clearly label all AI-generated content across the platform, ban AI-generated content entirely from YouTube Kids, and prohibit AI-generated “Made for Kids” content on the main YouTube platform. Fairplay also wants YouTube to bar its algorithm from recommending AI content to users under 18, introduce a parental toggle that keeps AI content switched off by default, and halt all investment in AI-generated content targeting children.
That last demand takes direct aim at YouTube’s investment in Animaj, an AI-powered children’s entertainment studio backed by Google AI Futures. “YouTube is essentially investing in harming babies through its purchase of Animaj,” Franz said.
Bullwinkle also noted that the 15 channels mentioned in the Times article are not on YouTube Kids and that the platform removed videos that violated its Child Safety policies. But for Franz, that’s not good enough.
“It shouldn’t be up to individual researchers to point out a few channels as examples that are doing things that could potentially harm kids, and have that be the basis for what YouTube decides to kick off the platform. What we saw with Elsagate was that at that time, YouTube removed 150,000 videos from its platform and several hundred different channels,” Franz said, referring to the 2017 scandal in which thousands of videos on YouTube and YouTube Kids used familiar children’s characters, like Elsa from Frozen and Peppa Pig, to hide deeply disturbing content including graphic violence, sexual themes, and drug use, all dressed up with algorithm-friendly tags like “education” and “fun” to slip past filters and reach young children.
“So we know that YouTube has the capacity to monitor, track, and remove these videos at scale, but right now, they’re doing a band-aid approach, where the channels that are getting press coverage, it seems like those are the ones they’re going forward doing something about,” Franz continued. “But it’s not fixing the overall problem.”