“When the family is arguing about politics and they ask for my expert opinion.”
Thirty-one million people have viewed the clip. More than 1.6 million liked it. The comments are full of adoration: “My time to shine.” “They’re not ready for the truth.” A verified user asks why everyone is “glorifying fascism,” and is drowned out by replies.
And if you linger on that reel—or anything like it—you’ll quickly find that it’s almost quaint compared with what comes next.
Roughly 1.4 million people watched that video; 142,000 liked it. Comments include lines like: “We owe the big man an apology,” and “He was right about everything.”
After Fortune brought these clips to the attention of Instagram’s owner, Meta, but before the company offered an official comment, it scrubbed the clips.
This one drew 3.2 million views and more than 250,000 likes and shares.
Within minutes, a clear pattern emerges. This content is not isolated, and it’s not niche. It’s ambient. It’s seemingly everywhere. And it’s algorithmically arranged to look like you’re the one “discovering” the truth: a feed that, once nudged in a certain direction, abruptly begins to resemble anti-Semitic and racist propaganda.
Instagram’s algorithm rewards whatever maximizes watch time and shares, and in 2025 that has included conspiratorial, racist, or anti-Semitic memes packaged as humor or even a kind of aesthetic. Monetization programs, clip-farm networks, and incentives to promote third-party products fuel that dynamic, turning extremist-flavored content into a profitable engagement strategy for creators.
In a statement to Fortune, Meta said: “We don’t want this kind of content on our platforms, and brands don’t want their ads to appear next to it.” The company added that it had included “the relevant violating content in our database” so that it could remove “copies” if someone tries to upload them again.
Yet minutes after Meta sent its statement, this reporter opened Instagram Reels and saw another ad from JPMorgan Chase sitting directly above a reel from the anti-Semitic meme account @goyimclub. The reel used a familiar Holocaust-denial setup—“If I have 15 ovens baking cookies 24/7, how many years would it take to bake 6 million cookies?”—a favorite trope of these sorts of accounts, designed to mock the death toll of the Holocaust and suggest the real number was far lower, often falsely claimed to be 271,000.
Immediately after the JPMorgan Chase ad, another reel surfaced—this one from the anti-Semitic account @gelnox.exe. It showed what looked like a ChatGPT conversation asking, “When did Spain expel the Jews?” (with “Jews” censored), followed by “1492.” Then: “When did the Spanish Golden Age start?” Again: “1492.” The implication, obviously, was that Spain’s prosperity began only after removing Jewish people. That reel had more than 5 million views and 316,000 likes.
Or, as one Pakistani Gen Z creator who earns money posting anti-Semitic reels told Fortune: “Those videos don’t get banned anymore.”
In a statement, Meta said: “While this story makes a number of claims, the facts are clear: In just the first half of 2025, we actioned nearly 21 million pieces of content for violating our prohibition on Dangerous Organizations and Individuals.” At first, Meta said that it had proactively detected nearly 99% of this content, before saying the actual percentage is in the low 90s. Meta added that its commitment to tackling anti-Semitism is “unchanged,” and that it removed the “violating content and accounts flagged to us.”
Meta did not address Fortune’s questions about how the flagged posts had been able to generate millions of views, or why they had been able to stay up for so long.
But the anti-Semitism and racism that Fuentes champions can hardly be called fringe when Instagram Reels trafficking in the same tropes routinely reach millions of views.
The creators behind these videos were clear in conversations with Fortune about why they make them: money. Henry, a 26-year-old tech worker in the U.K. who runs a far-right meme page with 90,000 followers (@notchillim) and who asked that his last name be withheld to avoid retaliation at work, told Fortune he has made “over £10,000” from T-shirt sales and shout-outs, and that posts referencing Hitler or the Holocaust “always get more traction.”
Fortune reached out to Whop for comment but received no response.
A U.S. tech worker in his twenties, who makes similarly anti-Semitic content and requested anonymity to avoid retaliation at work, said he made nearly $3,000 from Instagram’s bonus and referral programs before being demonetized. He said his most “offensive and political” posts drove the fastest audience growth. He added that he is Jewish and did not believe in the content himself, but said he had posted it in hopes of gaining enough followers to eventually delete the posts and then remonetize.
In fact, none of the three creators interviewed by Fortune claimed to have strong ideological motives beyond finding the memes vaguely amusing. All said controversial content is one of the only reliable, and easiest, paths to visibility—and therefore income. (Fortune was unable to independently verify the creators’ claims about their income.)
Several said the change was immediate: Reels that once got flagged or throttled were suddenly hitting millions of feeds. The Pakistani clip-farmer said those videos no longer “get banned,” and the British meme-page owner said his reach “jumped way higher.”
“During the early 2020s, these companies poured enormous resources into moderation,” Krieger said. “What we’re seeing now is the opposite, a conscious pullback, plus a redirecting of talent toward consumer AI.”
Krieger said he doesn’t believe that Meta is trying to platform hateful content; rather, it’s optimizing for “freedom of speech,” at the expense of other values. “I would say that is an ethical value: autonomy, people’s decision to choose,” Krieger said. “But it’s certainly coming at the cost of other ethical values, like safety and fairness.”
Most of the ecosystem, though, is built to avoid scrutiny. These accounts hide behind faceless branding or influencer shells, funneling traffic to crypto platforms, supplements, merch, or subscription services. In some cases, the creator isn’t even real: Renowned disinformation scholar Joan Donovan told Fortune that she thinks some accounts are entire “personas” that are built around clip-farmed content, using stock photos, semi-AI face sets, or lightly edited images to make racist reels appear tied to an attractive influencer. “Platforms don’t care about the quality of the content so much as the engagement it elicits,” Donovan said.
Meme scholar Aidan Walker described it as an “ironic dog whistle”—material that is plainly anti-Semitic, but stylized and self-referential enough that users can deny belief while still spreading the narrative.
The memes are so layered in jokes, edits, and esoteric references that “you actually can’t tell whether it’s racist or not … but if you know, you know,” Walker told Fortune.
The point isn’t that viewers literally believe in hollow-earth portals under Antarctica; it’s that by pretending to, they’re signaling a stance: Institutions are rigged, and only people fluent in this lore “really see through” reality.
The appeal, he argues, is emotional as much as ideological. The videos are competently edited, dense with references, and designed to feel like contraband.
“You watch one and think, ‘I shouldn’t be watching this. This is horrible,’” Walker said.
That transgression then becomes a bonding ritual—“we’ve gone there together; now you’re my brother because you get this and others don’t”—and a kind of “forbidden wisdom,” a dark explanation that makes the world feel like it secretly makes sense, he added.
But that esoteric world doesn’t just carry the potential for violence; it has already produced it.
The teen’s ideology is still under investigation. The Reuters report notes that the student was in several violent Telegram channels. But his references weren’t invented in a vacuum: They’re the same symbols saturating Instagram Reels feeds today.
The Jewish Gen Z tech worker behind one of the meme accounts said he believed that the violence was part of a pendulum effect.
“Everything was so anti–white people 10 years ago, and now there’s a bunch of pissed-off white people,” he said. “So, I don’t really know how bad it’s going to get, but violence seems much more likely than in the past.”
Did he not feel a sense of responsibility?
“I’m kind of just taking other accounts’ stuff and reposting it, so I guess that makes me feel like I’m not contributing as much to the whole thing,” he said, his voice trailing off into nervous laughter. “But, I mean, yeah, objectively, it’s not a great thing.”
His account, @violent_autism, which had nearly 100,000 followers, went dark soon after the interview. It’s unclear if he took it down himself or if Instagram did.
These accounts reach far beyond Gen Z fans: @forbiddenclothes has a notable fan who follows exactly 7,350 accounts on Instagram, from fitness influencers to meme pages to hunting gear stores to crypto traders. And while there’s no way to prove he’s one of the millions watching Nazi-leaning content with “unclear intent,” Donald Trump Jr., the president’s son, is listed as a follower of @forbiddenclothes. He did not respond to Fortune’s request for comment.
Update: Nov. 20, 2025: This report has been updated to clarify the role of Meta’s donations to Trump’s inauguration and his accusations of interference in the 2020 election. Additional context has also been added about Reuters reporting on the episode in Jakarta and on Zvika Krieger’s tenure at Meta.