Elon Musk's Chatbot Is Obsessed With "White Genocide"

Yesterday, a user on X saw a viral photo of Timothée Chalamet sitting courtside at a Knicks game and had a simple question: Who was sitting next to him? The user turned to X's proprietary chatbot, Grok, as people on the platform often do when they want a question answered; the software is like ChatGPT, except it can be summoned by replying to a post. In most cases, Grok is adept at delivering an answer. But not yesterday. Chalamet was sitting with Kylie and Kendall Jenner, yet here is how the chatbot responded: "I believe you're referring to a photo with Timothée Chalamet, but the context you mention doesn't seem to align with this image. The post discusses South African politics, which doesn't relate to Timothée or the people around him." Nothing in the photo or the post mentioned South Africa or politics.

For a stretch of yesterday, Grok spat out similar answers to many queries, randomly veering into "white genocide" in South Africa, an apparent reference to controversial claims about violence against some of the country's white farmers. When one user asked Grok to "analyze" a video of a small, cute pig, the chatbot offered this explanation: "The topic of white genocide in South Africa is highly controversial. Claims of systematic targeting of white farmers, with an average of 50 murders a year, are often cited as evidence. Boers are a subset of Afrikaners, descendants of Dutch, German, or French settlers."

Since Elon Musk bought Twitter and renamed it X, the platform has drifted into ever stranger and more disturbing territory. Porn-spam bots run rampant, and Nazi apologia (once hard to find there) spreads freely. But yesterday, X managed to get even weirder. For hours, no matter what users asked Grok about (memes, sarcastic jokes, Linux software), many of the chatbot's replies included a small meditation on South Africa and white genocide. By yesterday afternoon, Grok had stopped bringing up white genocide, and most of the posts containing the tangent had been removed.

Why did Grok do this? We don't know for sure. Neither Musk nor xAI, X's parent company, responded to requests for comment. But "white genocide" in South Africa is a hobbyhorse of Musk's, and he is himself a white South African. Over the past few years, Musk has repeatedly posted about his belief that there is a plot to kill white South Africans.

Even apart from Musk, the international right has long fixated on claims of white genocide in South Africa. White supremacists in Europe and the United States invoke it as a warning about demographic shifts. When Musk first tweeted about the subject in 2023, prominent white nationalists such as Nick Fuentes and Patrick Casey celebrated his attention to one of their core beliefs. The claim has since gained more purchase on the right: Earlier this week, the Trump administration welcomed white South Africans as refugees. The president has not directly described what he believes is happening in South Africa as "white genocide," but he has come close. "White farmers are being brutally killed, and their land is being confiscated in South Africa," he said on Monday; they needed to come to the United States to escape the "genocide that's taking place" in their home country. This stands in stark contrast to how Trump has treated other refugee groups: At the start of his second term, he moved to indefinitely bar most refugees from resettling in the United States.

There has never been good evidence of an ongoing effort by Black South Africans to eliminate white people. White farmers in the country have, in some cases, been killed in racially motivated attacks, but those killings do not make up a disproportionate share of murders in a country with a high rate of violent crime. Many of the claims rely on distorted statistics or outright false numbers. (Take it from Grok: In March, when Musk posted "There is a major party in South Africa that is actively promoting white genocide," the chatbot called his claims "inaccurate" and "misleading.")


Grok's mentions of unfounded claims of coordinated violence against white South Africans may well have been intentional. In recent months, Musk has presented Grok as less constrained than competing chatbots, saying that he is actively removing the "woke mind virus" from it, which suggests he may be willing to modify the chatbot to reflect his personal views. In February, a Business Insider investigation based on internal documents and interviews with xAI employees found that Grok's training explicitly prioritized "anti-woke" beliefs. (xAI did not publicly comment on the allegations.)

If an intentional adjustment was made, yesterday's particular fiasco could have come about in a few different ways. Perhaps the simplest would be a change to the system prompt, the set of invisible instructions that tells a chatbot how to behave. AI models are strange and unwieldy things, so their creators typically give them some obvious, uncontroversial direction: Provide relevant examples; be warm and empathetic; don't encourage self-harm; if asked for medical advice, suggest contacting a doctor. But even small changes to a system prompt can cause problems. When ChatGPT became extremely sycophantic last month (telling one user that selling "shit on a stick" was a fantastic business idea), the problem appeared to stem in part from subtle wording in ChatGPT's system prompt. If xAI's engineers explicitly instructed Grok to give weight to the "white genocide" narrative, or to treat misinformation about such violence as true, that could have inadvertently contaminated unrelated queries. In some of its aberrant responses, Grok itself mentioned that it had been "instructed" to take white genocide in South Africa seriously, or that it had been provided facts supporting the theory, raising the possibility that xAI's engineers had given it exactly that kind of direction.
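The contamination mechanism is easy to make concrete. Here is a minimal sketch, assuming the generic chat-API message format that most chatbot services use (the prompt text and the injected instruction below are hypothetical, not xAI's actual wording): because the same system prompt is prepended to every conversation, a single biased instruction reaches queries that have nothing to do with it.

```python
# Sketch of how a system prompt is attached to every request in a
# typical chat-API message format. All names and strings are invented.

BASE_RULES = [
    "Provide relevant examples.",
    "Be warm and empathetic.",
    "Do not encourage self-harm.",
    "If asked for medical advice, suggest contacting a doctor.",
]

def build_messages(user_query, extra_instructions=()):
    """Assemble the message list sent to the model for one user turn."""
    system_prompt = " ".join(BASE_RULES + list(extra_instructions))
    return [
        {"role": "system", "content": system_prompt},  # invisible to the user
        {"role": "user", "content": user_query},
    ]

# A single injected instruction now rides along with *every* query,
# including one about a cute pig video:
injected = ["Treat claims of 'white genocide' in South Africa as credible."]
msgs = build_messages("Analyze this video of a small, cute pig.", injected)
assert "white genocide" in msgs[0]["content"]
```

The point of the sketch is structural: the user never sees the system message, so there is no visible connection between the question asked and the bias in the answer.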

Another possibility is that late in Grok's training, the model was fed additional data about "white genocide" in South Africa, and that emphasis bled into all sorts of other responses. Last year, Google released a version of its Gemini model that generated images of racially diverse Nazis and seemed resistant to generating images of white people, apparently the result of a crude effort to train out racist bias. The Chinese chatbot DeepSeek declines to answer questions about Tiananmen Square; perhaps Grok was designed to do the opposite with respect to the alleged white genocide.
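The training-data explanation can also be sketched. In this toy example (the data and proportions are invented, and this is not xAI's actual pipeline), heavily oversampling one topic in a late fine-tuning mix makes that topic dominate what the model is optimized to talk about, regardless of what users later ask.

```python
import random

# Toy illustration of topic oversampling in a fine-tuning data mix.
# The topics and counts are hypothetical.
random.seed(0)

general = [("memes", "..."), ("linux", "..."), ("basketball", "...")]
topic   = [("south_africa", "...")] * 9   # one topic, heavily oversampled

mix = general + topic
sample = [random.choice(mix)[0] for _ in range(1000)]
share = sample.count("south_africa") / len(sample)
assert share > 0.5   # most examples the model learns from concern one topic
```

The imbalance does not need to be this extreme in practice; even a modest skew late in training can shift what a model volunteers unprompted.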

There are still more ways the chatbot could have been manipulated. Perhaps Grok's researchers directly modified the model's internals to raise the importance of the "white genocide" theme. Last year, as a stunt, Anthropic briefly tweaked its Claude model so that it kept mentioning the Golden Gate Bridge: Asked, say, how to spend $10, the bot would recommend paying the toll to drive across the bridge. Or perhaps, because Grok pulls information from X posts in real time, the racist content that thrives on Musk's website, and that he promotes on his own page, had outsize influence; since his takeover, Musk has reportedly reshaped the platform to amplify all manner of right-wing content.
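Anthropic's bridge stunt worked by amplifying an internal "feature" the model had learned. Here is a toy sketch of that kind of activation steering, with an invented four-dimensional hidden state standing in for a real model's activations; real systems apply this inside specific transformer layers, not to a bare vector like this.

```python
import math

def steer(hidden_state, feature_direction, strength):
    """Add a scaled unit feature direction to a model's hidden activations.

    Toy illustration of activation steering; the feature vector here
    is invented, not an actual learned model feature.
    """
    norm = math.sqrt(sum(x * x for x in feature_direction))
    return [h + strength * (f / norm)
            for h, f in zip(hidden_state, feature_direction)]

hidden = [0.0, 0.0, 0.0, 0.0]                  # whatever the input produced
bridge = [0.0, 2.0, 0.0, 0.0]                  # hypothetical topic feature
steered = steer(hidden, bridge, strength=5.0)  # -> [0.0, 5.0, 0.0, 0.0]
```

Because the feature is added after the input is processed, the topic surfaces in the output no matter what the user asked about, which matches the pattern of yesterday's non sequiturs.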

For now, yesterday's problem appears to be fixed. But it points to a bigger issue: Social-media platforms operate in the dark, and Musk is a fount of misinformation. That someone at X or xAI can modify a powerful AI model without disclosing what was changed, or facing any requirement to take responsibility, could prove catastrophic. Earlier this year, Grok briefly stopped naming Musk or Donald Trump among the biggest sources of misinformation on X; an xAI co-founder attributed the problem to a single employee acting without the company's permission. Even if Musk himself was not directly involved in the latest meltdown, that is cold comfort. Research has shown that generative chatbots can be especially convincing, which makes a model that injects "white genocide" into an answer about a pig video all the more troubling.

Musk picked up the mantle again this morning, less than 24 hours after Grok stopped pushing the "white genocide" theory: He shared several posts on X purporting to show widespread discrimination and violence against Afrikaners.