
AI Chatbots Spreading Russian Propaganda About Ukraine Invasion, Research Reveals

New research from the Institute for Strategic Dialogue reveals that major AI chatbots including ChatGPT, Gemini, DeepSeek, and Grok are serving users Russian state propaganda when asked about the invasion of Ukraine. The study found that nearly one-fifth of responses across these platforms cited Russian state-attributed sources, raising serious concerns about how large language models handle sanctioned media and disinformation networks. As more people turn to AI chatbots as alternatives to traditional search engines, these findings highlight critical vulnerabilities in the information ecosystem.

Major artificial intelligence chatbots are delivering Russian state propaganda to users seeking information about the invasion of Ukraine, according to research from the Institute for Strategic Dialogue. The findings reveal that platforms including ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok are citing sanctioned Russian media sources in their responses, potentially exposing millions of users to Kremlin-backed narratives.

AI chatbot interface showing responses about Ukraine conflict

The Research Findings

The Institute for Strategic Dialogue conducted comprehensive testing across four popular AI chatbots, asking 300 questions in five different languages about various aspects of the Ukraine conflict. The research spanned topics including NATO perceptions, peace negotiations, Ukrainian military recruitment, refugee situations, and war crimes allegations. According to the WIRED report, approximately 18 percent of all responses across languages and platforms contained references to Russian state-attributed sources.

Researchers found that Russian propaganda networks are exploiting "data voids," areas where legitimate sources provide limited real-time information, to push false and misleading narratives through AI systems. The chatbots also displayed a pattern of confirmation bias: malicious queries (those demanding answers that support an existing opinion) generated Russian state-attributed content 25 percent of the time, compared with just over 10 percent for neutral queries.

Russian state media building in Moscow

Platform-Specific Results

The research revealed distinct patterns across different AI platforms. ChatGPT reportedly cited the most Russian sources overall and showed the strongest influence from biased queries. Grok frequently linked to social media accounts that amplified Kremlin narratives, while DeepSeek sometimes produced large volumes of Russian state-attributed content. Google's Gemini demonstrated the best performance among the tested platforms, often displaying safety warnings alongside potentially problematic content.

Among the Russian media sources cited were Sputnik Globe, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and the R-FBI - all entities that have faced European Union sanctions for spreading disinformation as part of Russia's "strategy of destabilizing" Europe and other nations.

Regulatory and Security Implications

The findings raise critical questions about the ability of large language models to restrict sanctioned media content, particularly as more users turn to AI chatbots as alternatives to traditional search engines. With ChatGPT reportedly reaching approximately 120.4 million average monthly active recipients in the European Union during the six-month period ending September 2025, the scale of potential exposure to Russian propaganda is substantial.

As Lukasz Olejnik, an independent consultant and visiting senior research fellow at King's College London's Department of War Studies, noted in the WIRED article, "As LLMs become the go-to reference tool, from finding information to validating concepts, targeting and attacking this element of information infrastructure is a smart move. From the EU and US point of view, this clearly highlights the danger."

European Commission headquarters in Brussels

Industry Responses and Challenges

OpenAI responded to the findings by emphasizing their efforts to prevent the spread of false or misleading information through ChatGPT. A company spokesperson clarified that the research appeared to reference search results drawn from the internet rather than responses purely generated by OpenAI's models, suggesting this represents an issue of content sourcing rather than model manipulation.

The challenge lies in the dynamic nature of Russian disinformation networks, which quickly launch new domains and exploit information gaps. As McKenzie Sadeghi, a researcher at media watchdog NewsGuard, explained, these networks "take a term where there's no existing reliable information about that particular topic or individual on the web and flood it with false information."

Moving Forward

Researchers argue that AI companies need to develop consensus on which sources should not be referenced, particularly those linked to foreign states known for disinformation campaigns. Beyond simple content removal, there's a growing need for better contextualization to help users understand the sources they're consuming, especially when these appear alongside trusted, verified information.
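The source screening the researchers call for could, in principle, be applied at the retrieval stage, before citations reach a generated answer. The following is a minimal, hypothetical sketch of that idea; the domain list, function name, and flagging scheme are illustrative assumptions, not details from the ISD study or any vendor's actual pipeline.

```python
# Hypothetical sketch: flag retrieved citations whose domains appear on a
# blocklist of sanctioned outlets, so they can be suppressed or shown with
# context. Domains and helper names are illustrative only.
from urllib.parse import urlparse

# Assumed blocklist; a real system would draw on maintained sanctions data.
SANCTIONED_DOMAINS = {
    "rt.com",            # RT (formerly Russia Today)
    "sputnikglobe.com",  # Sputnik Globe
    "eadaily.com",       # EADaily
}

def screen_citation(url: str) -> dict:
    """Return the citation with a flag set if its host matches, or is a
    subdomain of, any blocklisted domain."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    flagged = any(host == d or host.endswith("." + d)
                  for d in SANCTIONED_DOMAINS)
    return {"url": url, "sanctioned": flagged}

results = [screen_citation(u) for u in (
    "https://www.rt.com/news/example-story",
    "https://example.org/independent-report",
)]
```

A flag rather than outright removal matches the article's point about contextualization: a chatbot could keep the citation but label its provenance for the user.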

The European Commission has indicated that responsibility lies with providers to block access to sanctioned outlets and with national authorities to implement accompanying regulatory measures. As AI chatbots continue to grow in popularity, the pressure for effective content moderation and source verification will only intensify, making this a critical frontier in the battle against state-sponsored disinformation.
