Elon Musk’s xAI has restricted its AI chatbot Grok’s image generation capabilities to paying subscribers only, following widespread condemnation over its use to create non-consensual sexualized images of real women and children.
However, experts, regulators, and victims say that the new restrictions aren’t a solution to the now-widespread problem.
“The argument that providing user details and payment methods will help identify perpetrators also isn’t convincing, given how easy it is to provide false info and use temporary payment methods,” Henry Ajder, a UK-based deepfakes expert, told Fortune. “The logic here is also reactive: it is supposed to help identify offenders after content has been generated, but it doesn’t represent any alignment or meaningful limitations to the model itself.”
“It is time for X to grip this issue; if another media company had billboards in town centers showing unlawful images, it would act immediately to take them down or face public backlash,” they said.
A representative for X said the company was “looking into” the new restrictions, while xAI responded with its automated message: “Legacy Media Lies.”
Over the past week, real women have been targeted at scale, with users manipulating photos to remove clothing, place subjects in bikinis, or depict them in sexually explicit scenarios without their consent. Some victims reported feeling violated and disturbed by the trend, with many saying their reports to X went unanswered and the images remained live on the platform.
Researchers said the scale at which Grok was producing and sharing the images was unprecedented because, unlike other AI chatbots, Grok has a built-in distribution system: the X platform itself.
“Restricting it to the paid-only user shows that they’re going to double down on this, placing an undue burden on the victims to report to law enforcement and law enforcement to use their resources to track these people down,” Ashley St Clair, one of the women targeted, said of the recent restrictions. “It’s also a money grab.”
St Clair told Fortune that many of the accounts targeting her were already verified users: “It’s not effective at all,” she said. “This is just in anticipation of more law enforcement inquiries regarding Grok image generation.”
Experts say the new restrictions may not satisfy regulators’ concerns: “This approach is a blunt instrument that doesn’t address the root of the problem with Grok’s alignment and likely won’t cut it with regulators,” Ajder said. “Limiting functionality to paying users will not stop the generation of this content; a month’s subscription is not a robust solution.”
The Council on American-Islamic Relations (CAIR) has also called for Grok to be blocked from generating “sexually explicit images of children and women, including prominent Muslim women.”
Riana Pfefferkorn of Stanford’s Institute for Human-Centered Artificial Intelligence previously told Fortune that liability surrounding AI-generated images is murky. “We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” she said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”
Musk has previously stated that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, it remains unclear how those users will be held accountable.