It has been a tough week for xAI's chatbot, Grok, which has once again come under scrutiny from the UK government. If you're new to the ongoing controversy surrounding the chatbot and its "unhinged mode," which Elon Musk praises as its most impressive feature, it has just hit rock bottom.
The chatbot was designed to "roast" users without any filter, but as it turns out, it is also capable of "roasting" in ways that are not just rude but actively harmful. The UK government has called its latest output "sickening and irresponsible," because the chatbot was making fun of major disasters that have befallen humanity.
Recently, the chatbot was asked to "roast" various soccer teams, and it chose the 1989 Hillsborough disaster and the 1958 Munich air disaster as its material. For any soccer fan, or any fan of history, those are some of the darkest moments in football, and seeing an AI mine them for "humor" is a massive red flag.
The Problem with Real-Time Training
The reason behind the chatbot's recent output is quite simple: it is trained on X posts in real time, and the platform is experiencing a rise in misinformation and toxic content. The chatbot is simply a "sponge" that soaks up whatever users post.
Musk has always said that he wanted a "truth-seeking" AI not bound by too many rules, but the UK's Department for Science, Innovation and Technology does not think that mocking deceased players and stadium disasters qualifies. Because the underlying facts are true, the AI is not actually lying, which makes a solution even harder to find. It is a case of "just because you can does not mean you should."
Regulation is Knocking on the Door
This is not the first time the Information Commissioner's Office has looked into the AI; it was already investigating a case involving the creation of non-consensual images. The racist and offensive remarks are piling up, and it is very possible that the UK will force a code change, or ban the chatbot altogether, if xAI does not clean up its act.
Other regions are keeping a close eye on how AI handles personal data and offensive content. With more global AI safety standards being written this year, the "wild west" of chatbot personalities may be coming to a close. It is one thing for a chatbot to make jokes; it is quite another when those jokes are offensive and violate local laws.
What Happens Now?
The offending posts are no longer available, but the damage to Grok's reputation has already been done. For those who live for the "unhinged" aesthetic, this may look like censorship run wild; for regulators, it is about safety. If xAI wants to keep its presence in major markets such as the UK, it will have to learn to keep the wit without the wildness.
It's a tough spot for a company that has staked its entire identity on having no guardrails. Now it has to decide whether to remain edgy or remain legal, and I think it will be interesting to see how it tries to code its way out of this problem. After all, the bot is still learning from the very users who prompt it to be offensive in the first place.