xAI blames an "unauthorized modification" for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to "white genocide in South Africa" when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about "white genocide in South Africa," even in response to unrelated subjects. The strange replies came from Grok's X account, which responds to users with AI-generated posts whenever someone tags "@grok."
According to a post from xAI's official X account, a change made Wednesday morning to Grok's system prompt (the high-level instructions that guide the chatbot's behavior) directed Grok to provide a "specific response" on a "political topic." xAI said the modification "violated [its] internal policies and core values," and that the company has "conducted a thorough investigation."
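For readers unfamiliar with the term, a system prompt is simply an instruction block supplied to the model ahead of the user's message. The sketch below is a generic, illustrative Python example of that structure; the wording, roles, and messages are hypothetical and are not xAI's actual prompt or API.

```python
# Illustrative only: a generic chat-style message list, not xAI's actual
# system prompt or API. The "system" message carries the high-level
# instructions that steer the model's behavior; the "user" message is the
# post that tagged the bot.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant replying to posts on X. "
                   "Stay on the topic of the post you are tagged in.",
    },
    {
        "role": "user",
        "content": "@grok what's the score of tonight's game?",
    },
]

# An unauthorized edit to the system message, of the kind xAI describes,
# would change the model's behavior in every conversation that follows.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```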
This is the second time xAI has publicly acknowledged that an unauthorized change to Grok's code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI's billionaire founder and the owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said on Thursday that it will make several changes to prevent similar incidents in the future.
Starting today, xAI will publish Grok's system prompts on GitHub along with a changelog. The company said it will also "put in place additional checks and measures" to ensure that xAI employees can't modify the system prompt without review, and establish "a 24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems."
Despite Musk's frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AI like Google's Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.