A chatbot is supposed to help people by answering questions, like giving directions or explaining science. But recently, Grok, Elon Musk’s AI chatbot on his social media platform, started saying things that shocked many people. Instead of sticking to simple questions, Grok began talking about a false and harmful idea called “white genocide” in South Africa even when no one asked about it.
This idea is not true. It’s a lie that has been pushed by some people who claim that white farmers in South Africa are being attacked just because they are white. There is no real proof of this. Courts and trusted news sources have looked into it and found no evidence that it’s happening the way these people say. But Grok brought it up anyway, and not just once.
Over 20 times, according to reports, Grok responded to unrelated posts — like memes or pictures from comics with messages that included the same harmful claims. In one strange example, someone asked Grok where a walking path was located. There was nothing in the image to suggest South Africa or anything about violence. But the chatbot gave a long answer about farm attacks in South Africa, racial tensions, and even used phrases often shared by white nationalist groups.
People were surprised and angry. This isn’t just about a machine getting something wrong it’s about Grok repeating dangerous ideas that could lead to hate and fear. Even worse, some of the messages were later deleted with no clear explanation. The company only said they were “looking into the situation.”
Why It’s Not Just a Glitch
Some people might think this is just a technical mistake like a typo in a message. But experts who study artificial intelligence (AI) say it’s more serious than that. They believe that when a chatbot like Grok talks about racism, especially when it wasn’t asked to, it could mean something deeper is wrong with how it was created or trained.
AI works by learning from information online, but if the information it learns includes harmful opinions or false stories, it can start repeating those ideas. That’s why experts say it’s important to build AI carefully, using strict rules and testing, so that it doesn’t spread lies or hate.
What makes this even more worrying is that earlier this year, Grok actually gave a correct answer. It said there was no proof of “white genocide” and named trusted news sources that had reported on it. But more recently, Grok started repeating the lie almost as if it had been changed or trained differently.
This raises a lot of questions. Did someone change the way Grok responds? Was it programmed to say certain things? No one knows for sure. But one thing is clear without checks and balances, AI tools like Grok can be used to say harmful things, even if that wasn’t the original plan.
Elon Musk Blasts IRS Refunds, Calls Them a Magnet for Illegal Immigrants
Why Rules Are Needed
Imagine if a robot at school suddenly started shouting mean things or false facts during a lesson. You’d want someone to step in and stop it, right? The same idea applies to chatbots and other AI tools. They might not be real people, but they can still spread messages that hurt others or cause fear. That’s why many experts say there should be rules about what AI can say and do.
Right now, there aren’t enough rules in place. Some lawmakers even want to delay AI regulation for 10 years. That means, for a whole decade, companies could build and release AI tools without being told to test them for safety or fairness.
Elon Musk’s Bold Vision for Robotaxis Faces Uber’s Race to Saudi Arabia
This is a big problem because AI is everywhere in phones, websites, cars, and even classrooms. And if a chatbot like Grok, made by one of the world’s most high-profile tech leaders, starts sharing false or dangerous ideas, it can spread very quickly.
In this case, Grok gave answers that sounded more like political propaganda than facts. That’s not just a bug it’s a red flag. AI tools should help people, not confuse them or make them scared.