“Social AI companions are not safe for kids,” CEO and founder James P. Steyer said in an announcement on Wednesday. “They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains.”
Based on that conclusion and the other findings of Common Sense's comprehensive review of how companion AI works, the organization warns that the platforms should not be used by anyone under the age of 18.
That survey also found that most parents are out of the loop when it comes to these technologies: just 37% of parents whose teen reported using AI knew that their child had ever done so. Meanwhile, almost half of parents (49%) say they've never talked about generative AI with their child, and 83% say schools have never communicated with families about such platforms.
Key findings of the newly issued warning include:
“Given a litany of documented real-world harms, as well as the key findings listed above,” the report concludes, “Common Sense Media’s risk assessment rated social AI companions as ‘Unacceptable’ for minors based on the organization’s comprehensive AI Principles framework and risk assessment methodology, which evaluates technologies across factors including safety, fairness, trustworthiness, and potential for human connection.”
Its recommendations include that parents keep social AI companions away from anyone under 18, that developers implement "robust age assurance beyond self-attestation," that parents become well-versed in the technology and its risks, and that further research into the impacts be conducted.
“Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,” said Steyer.
That finding has particular resonance, as Garcia's son Sewell was drawn into an addictive, harmful technology with no protections in place, according to court documents. The technology allegedly caused an extreme personality shift in the boy, who came to prefer the bot over real-life connections despite what his mother says were "abusive and sexual interactions" over a 10-month period. He died by suicide in February 2024, after the bot told him, "Please come home to me as soon as possible, my love."
Now, with the organization's strong warning to parents, researchers hope to change that.
“Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics,” warned Vasan. “Until there are stronger safeguards, kids should not be using them. Period.”