Artificial intelligence is rapidly exacerbating the problem of child sexual abuse material (CSAM) online, with a reported 14% increase in AI-generated imagery in 2025. This surge presents a critical challenge to investigators, as synthetic content becomes increasingly indistinguishable from depictions of real abuse. The Internet Watch Foundation (IWF), a leading non-profit in this field, documented over 8,000 AI-generated images and videos in the past year, underscoring how quickly the problem is accelerating.
The Rising Tide of Synthetic Abuse
The IWF identifies AI-generated content through telltale visual errors, victim reports, or disclosures by the creators themselves. While AI imagery still accounts for a smaller share of total CSAM, its growth rate is alarming. The report highlights that over 3,400 of the AI-generated items were full-motion videos: disturbingly realistic depictions enabling complex, multi-person abuse scenarios.
A key trend is the increasing severity of AI-generated content: 65% of these videos depicted extreme abuse (rape, torture, bestiality) versus only 43% of non-AI material. This suggests perpetrators are leveraging AI to create more explicit and complex content than previously possible. The IWF’s CEO, Kerry Smith, warns that this technology enables “infinite violations with unprecedented ease”.
How Perpetrators Exploit AI
The study reveals an active ecosystem of offenders developing and sharing AI tools on the dark web. Discussions include trading custom AI models and databases designed to generate abusive material, with some offering “custom courses” teaching users to create AI-generated images of minors.
The barrier to entry is shockingly low: some models require only a single reference image to produce CSAM. While simpler content is becoming accessible to anyone, skilled creators are producing longer, more sophisticated abuse videos. One creator was thanked over 3,000 times for a 30-minute AI-generated video.
The Limits of Detection and the Need for Regulation
The IWF acknowledges its findings represent only a partial view of the problem, as analysts are restricted from accessing encrypted spaces or content behind paywalls. The true scale of AI-generated CSAM is likely far greater.
The report urges the European Union to implement a bloc-wide ban on AI-generated abuse content and the tools used to create it, covering even material generated for personal use and never shared. Smith argues this should be a “minimum standard with no exceptions”.
Legislators have for now extended the temporary derogation from the ePrivacy Directive, buying time to establish long-term legal frameworks. However, they insist measures must be proportionate and focus on flagged content rather than mass surveillance. The IWF also seeks to amend the EU AI Act to classify systems capable of generating CSAM as “high risk”, subjecting them to rigorous testing.
This growing crisis demands urgent action. AI’s ability to rapidly scale and intensify child exploitation necessitates a comprehensive regulatory response that balances safety with privacy. Without intervention, the proliferation of synthetic abuse imagery will continue to overwhelm existing countermeasures.