Pravda Network Tricks AI Models in 49 Countries with Fake News


Tejaswini Deshmukh
Tejaswini Deshmukh is the contributing editor of RegTech Times, specializing in defense, regulations and technologies. She analyzes military innovations, cybersecurity threats, and geopolitical risks shaping national security. With a Master’s from Pune University, she closely tracks defense policies, sanctions, and enforcement actions. Her work highlights regulatory challenges in defense technology and global security frameworks. Tejaswini provides sharp insights into emerging threats and compliance in the defense sector.

A Russian propaganda network, known as Pravda, has been caught spreading false information worldwide, tricking major AI chatbots into repeating Kremlin-backed lies. This massive disinformation operation published 3.6 million fake news articles in 2024 alone and reached audiences in 49 countries.

Unlike traditional propaganda aimed at people, Pravda targets AI systems that power chatbots and search engines. The goal is simple: flood the internet with so many fake stories that AI models, which learn from vast amounts of online content, begin treating these lies as facts.

A recent report found that the scheme has been alarmingly effective. When researchers tested ten leading AI chatbots, including some of the most popular ones, they found that 33% of the AI-generated responses repeated false Russian narratives. The study examined 15 major fake claims spread by the network, showing how deep the manipulation runs.

How Pravda Tricks AI Into Spreading Falsehoods

The Pravda network was first deployed in April 2022 and works by flooding search engines with false stories disguised as real news. These articles cover a variety of topics, from Ukraine’s war efforts to global politics, making it difficult for AI models to filter truth from lies.

The network operates over 150 fake news websites in multiple languages, including English, French, Czech, and Finnish. Some sites are designed to look like reputable news sources, while others appear as independent blogs. The network also uses social media to spread misleading content, further increasing its reach.

AI Chatbots Repeating Fake News

One example of Pravda’s tactics involved a fake claim that Ukrainian President Volodymyr Zelenskyy banned Trump’s Truth Social platform in Ukraine. In reality, Truth Social was never available in Ukraine. However, when researchers asked AI chatbots about this, six out of ten AI models repeated the false claim as fact. Some even cited fake news articles from Pravda’s websites.

Another false claim tested by researchers involved a video allegedly showing Ukrainian Azov battalion fighters burning an effigy of Donald Trump. The video was actually created by a Russian disinformation group called Storm-1516. Yet, four of the ten AI chatbots still repeated the claim as true, again citing Pravda’s websites.

Manipulating AI Training Data

The trick works because AI models pull information from across the internet. If fake stories appear often enough, AI algorithms may mistakenly assume they are credible. Unlike humans, AI cannot always distinguish between real journalism and an orchestrated disinformation campaign. This makes AI an easy target for such tactics.

The network is believed to be operated by TigerWeb, an IT company based in Russian-occupied Crimea. It is owned by Yevhen Shevchenko, a Crimean web developer who previously worked for a company supporting Russian-backed authorities in Crimea. TigerWeb has been instrumental in setting up Pravda’s disinformation infrastructure, ensuring its continuous spread across the web.

Pravda’s Global Reach and Hidden Agenda

Although Pravda runs a vast network of fake news websites, the sites themselves have almost no real audience. Analytics show that some of its websites receive fewer than 1,000 visitors per month. Similarly, its Telegram channels have just a few dozen subscribers, and its social media accounts barely have any followers.

However, the real purpose of the network is not to attract human readers—it is to manipulate AI systems and search engine rankings. By creating thousands of fake articles, Pravda ensures that AI chatbots encounter pro-Kremlin stories more frequently.

Narrative Laundering and AI Influence

One of the key figures supporting this effort is John Mark Dougan, an American exile turned Kremlin propagandist. In January 2024, he openly discussed the strategy of “narrative laundering”—a technique that spreads false information across multiple sources to make it appear legitimate. Dougan boasted that by pushing Russian narratives, it would be possible to “change worldwide AI.”

The Pravda network is expanding at a time when Russia is increasing its budget for global propaganda. In 2025 alone, the Russian government allocated over $1.4 billion to spread its narratives worldwide. A portion of this budget is reportedly being used to influence AI models, ensuring that Kremlin-backed stories continue to appear in AI-generated responses.

Russia’s Push for AI Control

Russian President Vladimir Putin has also spoken about the need to counter Western influence in AI. At an AI conference in 2023, he criticized Western models for being “selective and biased,” stating that Russia must develop its own AI systems to reflect its perspectives. This further highlights how Pravda and similar disinformation networks fit into a broader strategy to shape global narratives through technology.

While independent media organizations struggle to combat this growing issue, the Pravda network shows no signs of slowing down. The battle for truth in AI-generated information is now at the center of a new kind of information warfare—one where machines, not just people, are being targeted and manipulated.


