Society · Featured · 4 min read · Read on WIRED

AI-Generated Anti-ICE Videos: Cathartic Fantasy or Misinformation Hazard?

Across Instagram and Facebook, a wave of AI-generated videos depicts people of color confronting ICE agents in scenarios of resistance and justice. These digital creations, often garnering millions of views, serve as a form of political fantasy and catharsis during a period of heightened immigration enforcement. However, experts warn they exist in a complex gray area, potentially bolstering harmful stereotypes while simultaneously eroding public trust in authentic video evidence. This article explores the motivations behind this content, its societal impact, and the dangerous line between digital protest and misinformation.

Social media platforms are witnessing a new genre of political expression: AI-generated videos that imagine confrontations where people of color successfully resist or humiliate Immigration and Customs Enforcement (ICE) agents. These clips, which blend tense drama with bombastic fantasy, have spread rapidly across Instagram and Facebook, accumulating millions of views. They emerge against the backdrop of real-world tensions, including the federal occupation of Minneapolis and the fatal shootings of unarmed U.S. citizens by government officials. While creators and viewers may find these videos cathartic—offering a digital multiverse where accountability exists and resistance doesn't end in tragedy—they raise critical questions about their role in the information ecosystem. Are they a harmless outlet for frustration and a tool for political imagination, or do they contribute to a dangerous stew of misinformation and stereotype reinforcement?

A smartphone screen displaying an Instagram feed with AI-generated video thumbnails
Social media feeds are increasingly populated with AI-generated political content.

The Rise of AI Political Fan Fiction

The videos follow a recognizable pattern. In one, a New York City school principal wields a bat to block masked ICE agents from entering her school, declaring, "Let me show you why they call me bat girl." In another, a server at a Chinese restaurant flings a bowl of hot noodles at officers. The encounters are designed to be victorious and non-lethal, creating what AI creator Nicholas Arter describes as "digital counternarratives." These narratives push against distortions from political opponents and imagine a world where ICE agents are subject to the rule of law. The content is prolific; an account under the name Mike Wayne has uploaded over 1,000 such videos to Instagram and Facebook since early January. The tone ranges from serious to satirical, with clips featuring drag queens chasing officers or a priest ejecting agents from a church, quipping about worshiping an "orange" god.

Motivations: Catharsis, Virality, and Resistance

Understanding why these videos are made requires examining multiple, often overlapping, motivations. For some creators and audiences, they serve as pure catharsis. Filmmaker Willonious Hatcher suggests they resonate because they articulate "a shared desire for liberation," allowing people to fabricate images of justice that feel unattainable in reality. They are a form of political fantasy, essential during times of unrest. The drive is not solely ideological, however. As Arter notes, some creators are likely "chasing virality or monetization by leaning into controversial or emotionally charged content." Joshua Tucker, codirector of NYU's Center for Social Media, AI, and Politics, adds that creating popular anti-ICE content serves a dual purpose: it amplifies a political viewpoint and engineers viral engagement, building online political capital. The line between authentic resistance and performance for algorithms becomes blurred.

The logo for U.S. Immigration and Customs Enforcement (ICE)
The U.S. Immigration and Customs Enforcement (ICE) logo.

The Double-Edged Sword of AI-Generated Content

While these videos may feel empowering, they introduce significant risks. The primary concern is the erosion of trust in video evidence. Tucker warns that the flood of synthetic content could lead to "a general perception that you just can’t trust videos when you see them anymore." This danger materialized when authentic footage of Alex Pretti confronting ICE officers was posted online; many commenters dismissed it as AI-generated, despite confirmation from his family. This "liar's dividend"—where the existence of fakes casts doubt on real evidence—undermines crucial documentation of state actions. Furthermore, at a time when protesters are labeled "domestic terrorists," videos predominantly showing people of color in confrontational stances with authority could, as Arter cautions, bolster harmful narratives and provide "permission" for heightened repression. The technology used for resistance can also be a tool for the state, as seen when the White House posted an altered photo of a civil rights attorney to discredit her.

Navigating the Future of Digital Protest

The proliferation of anti-ICE AI videos signals a new chapter in online activism and misinformation. They are a symptom of a political climate where people feel systemic protection is out of reach, leading them to "dream loudly of fighting back," as Hatcher observes. However, their impact is paradoxical. They offer a space for collective imagination and critique but simultaneously risk muddying the waters of factual discourse and reinforcing the very stereotypes activists fight against. As AI tools become more accessible and their output more convincing, the challenge will be to harness their potential for advocacy without compromising the integrity of evidence and public understanding. The videos are more than just fan fiction; they are a diagnosis of a society's fractures and a test of its ability to discern truth in an increasingly synthetic media landscape.

