Still, the incident, in which Moreno-Gama also went to OpenAI’s headquarters, tried to shatter the building’s glass doors with a chair, and threatened to burn the facility, surfaced his activity on Pause AI’s Discord server and renewed scrutiny of Stop AI’s direct actions targeting OpenAI last year.
Elmore told Fortune that she had been on her way to Washington, D.C., last week to finish preparing for a peaceful demonstration on Capitol Hill and meetings with members of Congress when the attempted firebombing occurred. “When I landed, suddenly I was getting these questions about somebody who had attacked Sam Altman’s house,” she said. “It’s been back and forth between working on something that I feel really proud and positive about, and it’s just exactly the right kind of change to be making—democratic change through democratic means—and then having to comment on this horrible event and additionally being really smeared with a connection to this event.”
The group has “no reason to think that this person had much to do with us,” she added, pointing out that Pause AI’s stance on violence “has always been incredibly clear” and explicitly prohibits it. She also emphasized that the activity occurred on a public, global Discord server distinct from Pause AI US’s organizing channels, and said the suspect “didn’t get any further in onboarding or having any official role.”
Elmore added that Pause AI deliberately vets volunteers and keeps tight control over its messaging to avoid being associated with extreme views.
Weiss-Blatt said the film shows Elmore urging activists to understand what she describes as an urgent timeline toward potential human extinction. “She’s never advocating violence, but is raising the stakes about doom,” Weiss-Blatt said.
“When prominent AI doomers like Eliezer Yudkowsky—author of If Anyone Builds It, Everyone Dies—keep insisting that human extinction is imminent, it should not be surprising when someone is driven to extreme action,” she added. “Young, anxious followers, looking for purpose, can be radicalized by apocalyptic AI rhetoric, even without explicit calls for violence.”
However, Mauro Lubrano, a lecturer at the University of Bath and author of Stop the Machines: The Rise of Anti-Technology Extremism, cautioned that there is a clear distinction between groups that seek to eradicate technology violently and those advocating for regulation or a pause. “I think it’s easy to conflate all of these groups and movements that are trying to raise awareness of some of the dangers of AI,” he said.
The incident at Altman’s home occurred about five months after OpenAI told employees at its headquarters to shelter in place because a 27-year-old man named Sam Kirchner threatened to go to several OpenAI offices in San Francisco to “murder people,” according to callers who notified police that day. Kirchner was a cofounder of Stop AI, a group he launched in 2024 with 45-year-old Guido Reichstadter, both of whom had previously been involved in Pause AI.
To set the record straight about Moreno-Gama, Stop AI wrote that he had “joined the Stop AI public online forum, introduced himself, then asked, ‘Will speaking about violence get me banned?’ After he was given a firm ‘yes,’ he ceased all activities on our forum. This was several months before his alleged criminal activities.”
Valerie Sizemore, one of five coleaders of Stop AI, told Fortune that some of its members are now anxious about becoming too closely associated with the OpenAI incident. “But personally, I think it’s all the more important for the nonviolent organizing we’re doing, to give people something other than violence to do,” she said.
The organization remains focused on its San Francisco–based efforts to protest at frontier lab headquarters, Sizemore added, and also participated in a local “Stop the AI Race” protest last month.
Lubrano, the University of Bath lecturer, pointed out that anti-technology activism, and anti-technology extremism, have a long history, reaching back at least to the Luddites, the 19th-century English textile workers who opposed machinery and industrialization.
For many, AI represents the sum of all fears when it comes to technology, he explained. “Technology is viewed as a system, and all parts are dependent on one another,” he said. “With AI being deployed in warfare, to monitor worker performance, to monitor people taking part in demonstrations or to ensure that they behave—there’s an element of this technological oligarchy wanting to control us and converging thanks to AI.”
He advised engaging with anti-AI groups rather than dismissing them as technophobes or anti-technology. “The Luddites were not against technology—they were against the unmitigated introduction of technology because it was disrupting their lives. And these concerns were not heard, and eventually the Luddites turned to violence.” Ignoring those concerns, he warned, can fuel resentment and, at the margins, lead to more extreme behavior—though it would be wrong to blame acts of violence on the mere existence of such groups.
Still, independent researcher Weiss-Blatt insisted that the views and actions of groups like Pause AI and Stop AI can still radicalize followers, which can, in turn, produce dangerous outcomes.
“The warning signs were there all along, including the November 2025 lockdown at OpenAI’s offices,” she said. “The real question is how long the people fueling AI panic expect to avoid responsibility for where that radicalization leads, especially for the most vulnerable.”
Pause AI’s Elmore said she believes public understanding of AI issues is likely to deepen, making it harder to conflate peaceful activism with isolated acts of violence. While the topic is still new and often viewed as a single, undifferentiated space, she expects it to become a major focus of national attention.
“People will see it’s not so easy to paint [all of us] with one brush,” she said.