
X's Grok AI Fails in Iran Conflict Verification, Amplifying AI-Generated Disinformation

Elon Musk's AI chatbot Grok is failing to accurately verify video footage from the Iran conflict and is sharing its own AI-generated images about the war, according to analysis by WIRED. The platform has become flooded with sophisticated AI-generated disinformation, including fake videos of buildings on fire and captured US troops, with some posts receiving millions of views. This proliferation of AI-generated content during the conflict highlights growing concerns about the weaponization of AI tools for propaganda and the challenges platforms face in moderating such content during crises.

In the midst of the ongoing conflict between the US, Israel, and Iran, a disturbing trend has emerged on the social media platform X: the proliferation of AI-generated disinformation that is being amplified by the platform's own verification tools. According to a WIRED investigation, Elon Musk's AI-powered chatbot Grok is failing to accurately verify video footage from the conflict and is instead sharing its own AI-generated images about the war, contributing to what researchers describe as an "unhinged from reality" information environment.


The Grok Verification Failure

The problems with Grok's verification capabilities became apparent when researcher Tal Hagin asked the chatbot to verify a post on X about Iranian missiles that had supposedly struck Tel Aviv. According to the WIRED report, Grok repeatedly misidentified the location and date of the video, which had originally been shared on X by an Iranian state-owned media outlet. More alarmingly, the chatbot then attempted to prove its point by sharing an AI-generated image, prompting Hagin to remark that "Grok is replying with AI slop of destruction."

The Flood of AI-Generated Disinformation

Since Israel launched its attacks on Iran on June 13, X has been flooded with disinformation from accounts sharing fake and repurposed videos. As the conflict has continued and the US has joined the strikes, this flood has been supercharged by increasingly sophisticated AI-generated images and videos. The proliferation of easy-to-access AI image- and video-generation tools has produced content realistic enough to fool many viewers.


Examples of Viral AI Content

The WIRED report documented several instances of AI-generated content that went viral on the platform. One image of a US B-2 bomber being shot down by Iran, with US troops detained, was viewed more than a million times before it was deleted. Another set of images showing members of Delta Force being captured by Iranian authorities was viewed more than 5 million times before removal. Iranian officials and state media have also shared AI-generated videos of a high-rise building in Bahrain on fire.

Platform Response and Policy Gaps

When the flood of AI-generated fakes began taking over the platform last week, X announced it would temporarily demonetize blue-check-mark accounts if they post AI-generated videos of armed conflict without a label. However, the platform did not respond to requests for comment about how many accounts it had demonetized since introducing the measure. The WIRED report notes that until recently, a number of Iranian officials appeared to be paying X for its premium service, which provided their accounts with blue check marks, boosted engagement, and created the potential to earn money for their posts.


Broader Platform Challenges

The issues on X reflect broader challenges facing social media platforms during conflicts. On Tuesday, Meta's Oversight Board criticized the company's approach to labeling AI-generated content, stating that Meta's systems are "neither robust nor comprehensive enough to handle the scale and speed of AI-generated misinformation, particularly during crises and conflicts." Meta responded that it welcomed the board's findings, indicating recognition of the problem across the industry.

Expert Analysis and Concerns

Researchers from the Institute of Strategic Dialogue (ISD) shared analysis with WIRED showing that AI is also being used by the Iranian government to push overtly antisemitic narratives. Accounts in a pro-regime propaganda network on X have shared AI-generated posts depicting Orthodox Jews leading American soldiers to war or celebrating American deaths. Isis Blachez, an analyst with media watchdog NewsGuard, tells WIRED: "As AI-generated images and videos are increasingly sophisticated, users might not put into question visuals that are pushed as 'evidence' to support pro-Iran claims when they look so real."

The Future of Information Integrity

Tal Hagin, whose interaction with Grok highlighted the verification problems, tells WIRED: "What is particularly unique about this war is the dramatic uptick in AI-generated content I find myself debunking. This is likely due to AI being advanced enough to fool journalists, and the ease with which users can create this AI slop with zero consequences. The longer we go without regulations against AI abuse, the more harm will be caused. I see the proliferation of AI-based fake news pushing us over the edge of a fact-based world unless we enact change now."

The situation on X during the Iran conflict represents a watershed moment for AI-generated disinformation. As platforms struggle to implement effective moderation policies and their own AI tools fail to provide accurate verification, the line between reality and AI-generated fiction becomes increasingly blurred. This development has serious implications for how conflicts are perceived by global audiences and underscores the urgent need for more robust systems to detect and label AI-generated content, particularly during times of crisis when accurate information is most critical.
