Elon Musk's AI chatbot Grok appears to have suffered a glitch on Wednesday that led it to reply to dozens of posts on X with information about "white genocide" in South Africa, even when the user hadn't asked about the topic.
The strange responses came from Grok's X account, which replies with an AI-generated post whenever a user tags @grok. When asked about unrelated topics, Grok repeatedly told users about "white genocide" and the anti-apartheid chant "Kill the Boer."
Grok's odd, off-topic replies are a reminder that AI chatbots are still a nascent technology and may not always be a reliable source of information. In recent months, AI model providers have struggled to moderate their chatbots' responses, which has led to strange behavior.
OpenAI was recently forced to roll back an update to ChatGPT that made the chatbot overly sycophantic. Meanwhile, Google's Gemini chatbot has at times refused to answer questions about political topics, or given misinformation when it did.
In one example of Grok's misbehavior, a user asked Grok about a professional baseball player's salary, and Grok replied: "The claim of 'white genocide' in South Africa is highly controversial."
Several users on X reported similarly messy, bizarre interactions with the Grok AI chatbot on Wednesday.
It's unclear what caused Grok's strange answers, but xAI's chatbots have been manipulated in the past.
In February, Grok 3 appeared to briefly censor unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babuschkin seemed to confirm that Grok had been briefly instructed to do so, though the company quickly reversed the instruction after the backlash drew attention.
Whatever caused the error, Grok now appears to be responding to users more normally. A spokesperson for xAI did not immediately respond to TechCrunch's request for comment.