The numbers are staggering, but experts say what we’re seeing is only the beginning. As AI-generated child sexual abuse material, or CSAM, surges to record levels, researchers warn that the technology isn’t just producing more harmful content but is fundamentally changing how children are targeted, how survivors are revictimized, and how investigators are overwhelmed.
The surge is a direct consequence of generative AI becoming faster, cheaper, and more accessible to bad actors. Thorn has identified three distinct ways these tools are now being weaponized against children.
The first is the revictimization of historical abuse survivors. A child who was abused in 2010 and whose images have circulated online for over a decade now faces an entirely new layer of harm. Offenders are using AI to take those existing images and personalize them: inserting themselves into recorded scenes of abuse to produce new material.
The technology has also made some of the most repeated child safety guidance dangerously outdated. For years, children have been warned not to share images online as a basic safeguard against exploitation. That advice no longer holds. Thorn’s own research found that one in 17 young people has personally experienced deepfake imagery abuse, and one in eight knows someone who has been targeted. Victims of sextortion are now being sent images that look exactly like them: images they never took.
For parents, Stroebel’s message is urgent and unambiguous. The conversation cannot wait, and it must go further than old warnings. If a child comes forward, the first response cannot be skepticism: “Our job is, ‘Are you safe, and how do I help you move through to the next step?’”