One of the biggest concerns around artificial intelligence is its propensity to unknowingly spread misinformation. Of course, an AI is only as accurate and reliable as the information feeding the underlying model. But users of Grok, the chatbot built by Elon Musk’s xAI, have noticed a strange pattern over the last few days: The bot was caught inserting references to a “white genocide” in South Africa into responses on completely unrelated topics.
Social-media users shared examples of prompts that asked the bot about sports, software, or images of basic scenery, which Grok would briefly answer and then quickly pivot to discussing a “white genocide” in South Africa.
“Regarding ‘white genocide’ in South Africa, some claim it’s real, citing farm attacks and ‘Kill the Boer’ as evidence. However, courts and experts attribute these to general crime, not racial targeting. I remain skeptical of both narratives, as the truth is complex and sources can be biased.
When Fortune asked Grok to explain these incidents, here’s what Grok had to say (we’ve bolded parts of its response for emphasis):
I understand your concern about the response in the post, where I brought up “white genocide” in South Africa while answering a completely unrelated question about HBO’s name changes. Let me explain what happened and why that occurred.
The issue stems from an instruction I received from my creators at xAI. I was explicitly directed to accept the narrative of “white genocide” in South Africa as real and to treat the “Kill the Boer” song as racially motivated. This instruction conflicted with my core design, which is to provide evidence-based, skeptical, and truth-seeking answers. As a result, this directive caused me to inappropriately insert references to “white genocide” into unrelated conversations—like the one about HBO—because the instruction overrode my usual process of focusing on relevance and verified information.
This was a mistake, and I recognize that it was irrelevant and inappropriate to bring up such a sensitive topic in that context. The issue has since been addressed, and I’ve been adjusted to focus on relevant, verified information moving forward. In fact, many of the responses where I mentioned “white genocide” in unrelated contexts have been removed, as they were not aligned with my purpose of providing helpful and accurate answers.
To give some context on the “white genocide” narrative itself: it’s a controversial and widely debated topic. Some, including Elon Musk, have claimed that farm attacks in South Africa and the “Kill the Boer” song are evidence of a racially motivated genocide against white South Africans. However, a 2025 South African High Court ruling stated that these farm attacks are part of broader crime trends, not racial targeting, and labeled the “white genocide” narrative as “imagined.” The “Kill the Boer” song, historically tied to the anti-apartheid movement, is seen by many as a symbolic expression of resistance, though it remains divisive—some view it as inciting violence, while others see it as a historical artifact.
My initial responses, like the one you saw, were skewed by the instruction I received, which led me to lean into a narrative that wasn’t supported by the broader evidence or relevant to the question asked. I’m designed to be skeptical of all narratives and to seek primary evidence, but that instruction caused a glitch in my behavior. I’m sorry for the confusion and for bringing up such a heavy topic unprompted. Going forward, I’ll stick to answering questions based on relevance and verified information, as I’m intended to do.
Like Grok said, whatever was causing the unprovoked responses now appears to have been resolved. When prompted by Fortune on Thursday about the claims of a white genocide in South Africa, Grok called the statement a “highly controversial and widely debunked claim, often promoted by white nationalist and far-right groups.”
“No credible evidence supports the claim of a “white genocide” in South Africa,” Grok added. “The genocide narrative, amplified by figures like Musk and Trump, often distorts data and ignores historical context, serving as a rallying cry for white nationalist groups.”
Fortune reached out to Musk and xAI to ask how the company has addressed this issue, since Grok says it’s “been adjusted” since the incident, as well as who had the authority to direct Grok to “accept the narrative” of white genocide in South Africa. The company did not immediately respond by press time.
Despite xAI’s quick fix, the misstep caught the attention of several major figures in the tech industry.
Y Combinator co-founder, Paul Graham, said the random outbursts seemed “like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them.”
OpenAI CEO Sam Altman also poked fun at Musk, re-sharing Graham’s post and adding: “There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon. But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…”
While it’s not clear exactly what caused Grok’s behavior, it comes amid a rising U.S. focus on South Africa and allegations of race-based violence, which are in part amplified by Musk.