Cuban called the move reckless and said parents will abandon ChatGPT the second they believe their kids could bypass the company’s age-verification system to access inappropriate content.
In other words: if there is any possibility that minors can access explicit content, including content generated by AI, parents and school districts will lock the product out before ever testing its safety features, making the move a poor business strategy.
Altman, however, argued in his original post announcing the change that ChatGPT has been “restrictive” and “less enjoyable” since the company toned down the voice of its signature chatbot in response to criticism that it was contributing to mental health issues. He added that the upcoming update will allow a product that “behaves more like what people liked about 4o.”
Cuban emphasized repeatedly in further posts that the controversy isn’t about adults accessing erotica. It’s about kids forming emotional relationships with AI without their parents’ knowledge, and those relationships potentially going sideways.
“Well, we haven’t put a sex bot avatar in ChatGPT yet,” Altman said.
OpenAI did not immediately respond to Fortune’s request for comment.
“This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices,” the lawsuit stated.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots,” Garcia testified. She accused the company of designing AI systems to appear emotionally human “to gain his trust and keep him endlessly engaged.”
She wasn’t the only parent to testify. Another mother, from Texas, speaking anonymously as “Ms. Jane Doe,” told lawmakers that her teenage son’s mental health collapsed after months of late-night conversations with similar chatbots. She said he is now in residential treatment.
“Parents today are afraid of books in libraries,” Cuban wrote. “They ain’t seen nothing yet.”