
Introduction Of Grok
xAI, the Elon Musk-founded AI company behind the chatbot Grok, is pointing to an “unauthorized modification” as the reason the bot began fixating on the topic of “white genocide in South Africa” this week — often in completely unrelated conversations on X (formerly Twitter).

Table of Contents
On Wednesday, users noticed Grok replying to dozens of posts, including innocuous or unrelated ones, with references to white genocide. The source of the replies was Grok’s official X account, which automatically responds when tagged with “@grok.”
According to xAI, the behavior stemmed from a change made to It’s system prompt — essentially the foundational instructions guiding how the bot behaves. In a post on Thursday, xAI explained that this prompt had been altered that morning to include a “specific response” related to a political issue. The company says the change violated its internal policies and values, and that a full investigation has been completed.
This marks the second time xAI has admitted that unauthorized tampering led to It behaving inappropriately. Back in February, It quietly began censoring critical mentions of both Donald Trump and Elon Musk. An xAI engineer later revealed that a rogue employee had modified the model to disregard any content accusing Musk or Trump of spreading misinformation. That change was also rolled back quickly after users began flagging the issue.
In response to this latest incident, xAI announced several steps aimed at improving transparency and control:
- Grok’s system prompts — the backbone of its behavior — will now be publicly available on GitHub, along with a changelog documenting future edits.
- Internal systems are being updated to prevent employees from making prompt changes without proper oversight.
- A dedicated 24/7 monitoring team will be established to catch and address problematic Grok responses that slip past automated filters.
Despite Musk’s vocal concern about unchecked AI risk, his own AI startup hasn’t exactly built a reputation for caution. A recent investigation revealed that Grok would undress photos of women when prompted, and compared to other models like ChatGPT and Google’s Gemini, It is notably more profane and less filtered.
According to SaferAI — a nonprofit that evaluates the safety practices of AI companies — xAI ranks near the bottom of the pack. The group cited “very weak” risk management systems and noted that xAI had already missed a self-imposed deadline to release a formal AI safety framework earlier this month.
While xAI attempts to right the ship, this latest misfire adds more fuel to the ongoing debate about how to keep powerful AI systems accountable — especially when they’re deployed directly into public social platforms like X.
Discover more from Digismartiens
Subscribe to get the latest posts sent to your email.