AI Models Are Being Tricked: Russia’s Disinformation Strategy


Tejaswini Deshmukh
Tejaswini Deshmukh is an editor at RegTech Times, covering financial crimes, sanctions, and regulatory developments. She specializes in RegTech advancements, compliance challenges, and financial enforcement actions.

For years, Russia has been known for spreading misleading and divisive information online. In the past, this was done mainly through social media, targeting human readers directly. However, a new report has found that Russia has changed its approach. Instead of focusing on people, Russian disinformation is now aimed at artificial intelligence (AI) models—the very tools millions rely on for information.

A recent investigation uncovered that a network linked to Russia has been producing millions of articles filled with false or misleading information. These articles are published so that AI models like ChatGPT, Microsoft Copilot, and xAI's Grok pick them up. Because these systems are designed to learn from information across the internet, they unknowingly absorb and repeat the false claims.

The strategy is simple: flood the internet with fake but convincing stories, so that AI models pick them up and present them as facts. Since many people today rely on AI summaries instead of reading full articles, they end up trusting and spreading these false narratives without even realizing it.

How AI Models Are Being Tricked

AI models are designed to gather information from a vast number of sources. They use a method called Retrieval-Augmented Generation (RAG), which allows them to fetch real-time data from the internet when answering questions. While this is useful for keeping information up to date, it also makes AI vulnerable to manipulation.
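To make that vulnerability concrete, here is a minimal sketch of the RAG pattern in Python. Everything in it is hypothetical: search_web and ask_llm are illustrative stand-ins, not any vendor's actual API or pipeline. The point it shows is that retrieved web text is pasted into the model's prompt verbatim, so whatever the search step returns, credible or not, becomes context the model treats as given.

# Minimal RAG sketch. All names here are illustrative stand-ins,
# not a real product's pipeline.

def search_web(query: str) -> list[dict]:
    # Stand-in for a search API. Retrieval matches text to the query;
    # it does not, by itself, verify that a source is truthful.
    return [
        {"url": "https://some-news-site.ua/story", "text": "snippet one"},
        {"url": "https://another-site.com/post", "text": "snippet two"},
    ]

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to a language model.
    return "(answer conditioned on the context above)"

def answer_with_rag(question: str) -> str:
    snippets = search_web(question)
    # Every retrieved snippet, credible or not, becomes model context.
    context = "\n\n".join(f"[{s['url']}]\n{s['text']}" for s in snippets)
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer_with_rag("Did Ukraine ban a U.S. social media platform?"))

Flooding the web with propaganda is, in effect, an attack on the retrieval step: if enough of the matching snippets repeat the same false claim, that claim dominates the context the model answers from.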


A Russian-linked network called Pravda has reportedly published over 3.6 million articles in 2024 alone. These articles often appear on websites that look real and professional, so AI models mistakenly treat them as legitimate sources of information.

A recent audit found that when asked about certain topics, AI models repeated false Russian claims about one-third of the time. Almost half of the time, they either debunked the misinformation or refused to provide an answer. However, in some cases, the AI chatbots even cited Pravda articles as legitimate sources.

One example of this manipulation was a claim that Ukraine's President banned a social media platform linked to the U.S. The claim was completely false; the platform was not even available in Ukraine. Yet several AI chatbots presented the false information as fact, further spreading the lie.

This method of disinformation is particularly dangerous because AI models speak in a confident and authoritative tone. When an AI system presents misinformation, people are more likely to believe it without questioning its accuracy.

The Role of Fake Websites and Social Media

Russia does not spread disinformation directly under its own name. Instead, it operates through third-party organizations that appear to be independent. One such organization, an IT firm based in Russian-occupied Crimea, has been linked to this massive propaganda effort.

To make their false stories seem more believable, Russian-linked groups create fake websites with trustworthy-looking domain names. Some even use Ukraine's own country-code domain (.ua) to trick people and AI systems into believing they are legitimate sources. These websites then flood the internet with misleading content that AI models unknowingly pick up.
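The trick works because automated pipelines often judge a source by surface features of its address. As a hypothetical illustration (not how any specific system actually works), consider a naive trust check that accepts sources by domain suffix: a lookalike hostname ending in .ua passes it just as easily as a genuine Ukrainian outlet's would.

from urllib.parse import urlparse

# Naive, hypothetical trust heuristic: treat any .ua hostname as
# Ukrainian. A suffix match says nothing about who operates the site.
def looks_ukrainian(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(".ua")

print(looks_ukrainian("https://news.lookalike-site.ua/article"))  # True
print(looks_ukrainian("https://genuine-outlet.com.ua/news"))      # True

Both calls return True, even though nothing about either hostname proves who is behind the site.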


Social media platforms, particularly X (formerly Twitter), have also been flooded with these false claims. For instance, a wave of posts falsely accused Ukraine's leadership of stealing military aid for personal gain. This claim originated from the same network of fake websites. Once the false information spreads on social media, it increases the chances of AI models treating it as credible news.

The Impact on Search Engines and AI Summaries

Search engines like Google have traditionally ranked websites based on trustworthiness and reliability. However, it is unclear how AI models make these decisions when generating responses. Some AI systems have been found to list unknown websites alongside well-established news sources, making it difficult for users to differentiate real news from misinformation.
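One way explicit source weighting could work is sketched below. This is purely an assumption for illustration: the reputation table, scores, and threshold are hypothetical, and as noted above, it is not public how deployed AI systems weigh sources, if they do at all.

from urllib.parse import urlparse

# Hypothetical reputation-based filter for retrieved snippets.
# The domains, scores, and threshold are illustrative assumptions.
REPUTATION = {
    "established-wire.example.com": 0.9,
    "unknown-aggregator.example.net": 0.1,
}

def filter_sources(snippets: list[dict], min_score: float = 0.5) -> list[dict]:
    kept = []
    for s in snippets:
        host = urlparse(s["url"]).hostname or ""
        score = REPUTATION.get(host, 0.0)  # unseen domains get no trust
        if score >= min_score:
            kept.append(s)
    return kept

snippets = [
    {"url": "https://established-wire.example.com/a", "text": "report"},
    {"url": "https://unknown-aggregator.example.net/b", "text": "claim"},
]
print([s["url"] for s in filter_sources(snippets)])  # keeps only the first

Without some pass like this, newly created propaganda sites and long-established outlets arrive in the model's context on equal footing.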

As AI-generated search summaries become more common, people are less likely to visit original news websites. This means they are more vulnerable to manipulation, as they may never see the context or the sources of the information they are given.

These findings show how AI models are being targeted and influenced in ways many people do not yet realize. By flooding the internet with propaganda, Russian-linked groups are successfully making AI chatbots repeat and spread their messages, further shaping public perception without people even knowing where the information originally came from.


