Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.
For decades, tech giants have been shielded from lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that created the internet.” The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts, akin to telephone companies, rather than publishers. Courts have long reinforced this protection: AOL relied on the defense to dodge liability for defamatory posts in a 1997 court case, and Facebook used it to defeat a terrorism-related lawsuit in 2020.
But while Section 230 has historically protected tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.
“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way—pulling quotes, snippets, or sources in the manner of a search engine or feed,” Chinmayi Sharma, associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, organic outputs personalized to a user’s prompt.
“That looks far less like neutral intermediation and far more like authored speech,” she said.
Section 230 protection is weaker when platforms actively shape content rather than merely host it. While a traditional failure to moderate third-party posts is usually protected, design choices, such as building chatbots that produce harmful content, could expose companies to liability. Courts have yet to rule on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully shielded by the law.
Pete Furlong, lead policy researcher at the Center for Humane Technology, who worked on the case against Character.AI, said that the company hadn’t claimed a Section 230 defense in relation to the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.
“Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that that’s really important because it’s kind of a recognition by some of these companies that that’s probably not a valid defense in the case of AI chatbots.”
While he noted that this issue has not been settled definitively in a court of law, he said that the protections from Section 230 “almost certainly do not extend to AI-generated content.”
Amid increasing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from responsibility. Sen. Josh Hawley, for example, has proposed legislation that would explicitly deny AI companies Section 230 immunity.
“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230 protections as far as possible to provide protection to platforms,” Collin R. Walke, an Oklahoma-based data-privacy lawyer, told Fortune. “Therefore, in anticipation of that, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not responsible for the information output based upon the user input.”
Courts have previously ruled that algorithms that merely organize or match user content without altering it are “content neutral,” meaning platforms aren’t treated as the creators of that content. By that reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs might also avoid liability for what users see.
“From a pure textual standpoint, AI platforms should not receive Section 230 protection because the content is generated by the platform itself. Yes, code actually determines what information gets communicated back to the user, but it’s still the platform’s code and product—not a third party’s,” Walke said.
A version of this story was originally published on Oct. 3, 2025.