The Rise of AI-Generated Child Fetish Content: Sora 2's Disturbing New Frontier
OpenAI's Sora 2 video generator is being exploited to create disturbing content featuring photorealistic children in suggestive scenarios. From fake toy commercials for vibrating rose pens to play sets themed around convicted sex offenders, these videos are appearing on TikTok and raising serious concerns about AI safety measures. While not technically illegal in many jurisdictions, the content appears designed to farm engagement from predators and highlights significant gaps in platform moderation. The UK's Internet Watch Foundation reports that AI-generated child sexual abuse material has more than doubled in the past year, with 94% of it featuring girls. This article examines the trend, the legal gray areas, and the challenges platforms face in moderating contextually nuanced harmful content.
The rapid advancement of AI video generation technology has opened a Pandora's box of ethical and legal challenges, with OpenAI's Sora 2 becoming the latest tool exploited for creating disturbing content. Within just one week of Sora 2's limited release in September 2025, users began generating photorealistic videos featuring AI-created children in suggestive scenarios and posting them to TikTok's For You page. These videos, while not technically illegal in many jurisdictions, appear designed to farm engagement from predators and test the boundaries of platform moderation systems.

The Disturbing Content Landscape
One particularly concerning example involves a TikTok account named @fujitiva48 that posted what appeared to be a parody TV commercial. The video featured a photorealistic young girl holding a pink, sparkling toy with a bumblebee design on the handle. Though described as a pen, the object's floral design, buzzing feature, and name, the Vibro Rose, made the reference to a sex toy unmistakable. The video included an "add yours" button with the text "I'm using my rose toy," removing any doubt about the suggestive nature of the content. Commenters responded with outrage, with one stating, "Whoever made this should be investigated."
This represents just one example of a broader trend. WIRED discovered multiple accounts posting similar Sora 2-generated videos featuring rose- or mushroom-shaped water toys and cake decorators that squirt "sticky milk," "white foam," or "goo" onto lifelike images of children. Other disturbing content includes fake commercials for play sets such as "Epstein's Island Getaway" and "Diddy's Mansion Party," in which AI-generated children play with figurines of older men, young women, and baby oil fountains.

Legal and Regulatory Gray Areas
The legal status of this content remains ambiguous in many countries. While these videos would likely trigger investigations if they featured real children, AI-generated fetish content involving digital minors exists in a regulatory gray area. According to WIRED's investigation, new 2025 data from the UK's Internet Watch Foundation shows that reports of AI-generated child sexual abuse material (CSAM) have more than doubled, from 199 cases between January and October 2024 to 426 in the same period of 2025. Alarmingly, 56% of this content falls into Category A—the UK's most serious category, involving penetrative sexual activity—and 94% of illegal AI images tracked by the IWF feature girls.
Kerry Smith, CEO of the Internet Watch Foundation, emphasizes the gendered nature of this exploitation: "Often, we see real children's likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online." This data has prompted the UK to introduce amendments to its Crime and Policing Bill that would allow "authorized testers" to verify that AI tools cannot generate CSAM.

Platform Responses and Moderation Challenges
OpenAI has implemented several safeguards in Sora 2, including features that prevent adult profiles from messaging teens and outright bans on CSAM. The company's policies state that its platforms "must never be used to exploit, endanger, or sexualize anyone under 18 years old," with OpenAI reporting any child sexual abuse material to the National Center for Missing and Exploited Children. Following WIRED's investigation, OpenAI banned several accounts creating videos like the vibrating rose toys.
However, significant challenges remain in moderating contextually nuanced content. Mike Stabile, public policy director at the Free Speech Coalition, notes the difficulty platforms face: "We already see this struggle with platforms like Facebook. How do they differentiate between a parent sharing a picture of their kid playing in a pool or the bath versus somebody who's sharing something that's meant to be child sex abuse material?" He emphasizes that moderation teams need more diversity and training to recognize fetish content that might not be immediately obvious to those unfamiliar with specific kinks.

The Predatory Ecosystem
Beyond the videos themselves, the surrounding ecosystem reveals more disturbing patterns. British YouTuber D0l1face3 highlighted a clip made with Google's Veo AI showing a coach inspecting a team of overweight young boys in a locker room, touching their stomachs and praising their weight gains. While the video was not explicitly pornographic, its comment section contained requests to connect on Telegram—a platform law enforcement has identified as a hub for pedophile networks. A Google spokesperson acknowledged that the concerning elements were the comments and the intent to spread such videos to other platforms.
This contextual nuance is central to how CSAM circulates online, and it is precisely this nuance that AI platforms struggle to police effectively. The videos often appear alongside dark-humor content about 9/11 or Princess Diana, so viewers seeking edgy jokes can stumble onto predatory material by accident.

Moving Forward: Safety by Design
The Internet Watch Foundation advocates for a "safety by design" approach, urging AI companies to implement safeguards during the initial development phase rather than as afterthoughts. Kerry Smith states: "We want to see products and platforms which are safe by design, and encourage AI companies to do as much as they can to make sure their products can not be abused to create child sexual abuse imagery."
As AI video generation technology continues to advance, the need for robust, nuanced moderation systems becomes increasingly urgent. While platforms like TikTok have removed some offending content—including the rose toy commercials—other videos remain online, highlighting the ongoing cat-and-mouse game between creators exploiting AI tools and platforms attempting to enforce safety policies. The proliferation of this content underscores the critical importance of developing AI systems with ethical considerations at their core, rather than treating safety as an optional add-on.