AI startup Character.AI is cutting off young people’s access to its virtual characters after several lawsuits accused the company of endangering children. The company announced on Wednesday that it would remove the ability for users under 18 to engage in “open-ended” chats with AI personas on its platform, with the update taking effect by November 25.
The company also said it was launching a new age-assurance system to verify users' ages and sort them into the appropriate brackets.
“Between now and then, we will be working to build an under-18 experience that still gives our teen users ways to be creative—for example, by creating videos, stories, and streams with Characters,” the company said in a statement shared with Fortune. “During this transition period, we will also limit chat time for users under 18. The limit initially will be two hours per day and will ramp down in the coming weeks before November 25.”
Character.AI said the change was made at least in part in response to questions from regulators about the content teens may encounter when chatting with AI characters. The FTC is currently probing seven companies—including OpenAI and Character.AI—to better understand how their chatbots affect children. Character.AI is also facing several lawsuits related to young users, including at least one connected to a teenager's suicide.
In a statement shared with Fortune, Meetali Jain, executive director of the Tech Justice Law Project and a lawyer representing several plaintiffs suing Character.AI, welcomed the move as a “good first step” but questioned how the policy would be implemented.
“They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy-preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created,” Jain said.
“Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies—not just for children, but also for people over 18. We need more action from lawmakers, regulators, and regular people who, by sharing their stories of personal harm, help combat tech companies’ narrative that their products are inevitable and beneficial to all as is,” she added.
Character.AI is not alone in facing scrutiny over teen safety and AI chatbot behavior.
Earlier this year, internal documents obtained by Reuters suggested that Meta’s AI chatbot could, under company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness.
A Meta spokesperson previously told Fortune that the examples reported by Reuters were inaccurate and have since been removed. Meta has also introduced new parental controls that will allow parents to block their children from chatting with AI characters on Facebook, Instagram, and the Meta AI app. The new safeguards, rolling out early next year in the U.S., U.K., Canada, and Australia, will also let parents block specific bots and view summaries of the topics their teens discuss with AI.