Source: theconversation.com 2/11/25
The Internet Watch Foundation, an organization that tracks child sexual abuse material posted online, has documented a surge in realistic, AI-generated, sexually explicit videos depicting minors over the first half of 2025. Some of the material was derived from images of real minors, and some was wholly synthetic.
The Supreme Court has implicitly concluded that computer-generated pornographic images that are based on images of real children are illegal. The use of generative AI technologies to make deepfake pornographic images of minors almost certainly falls under the scope of that ruling.
But the legality of the new fully AI-generated content is less clear. As a legal scholar who studies the intersection of constitutional law and emerging technologies, I see images that are completely fake but indistinguishable from real photos as a challenge to the legal status quo.
Policing child sexual abuse material
While the internet’s architecture has always made it difficult to control what is shared online, there are a few kinds of content that most regulatory authorities across the globe agree should be censored. Child pornography is at the top of that list.
For decades, law enforcement agencies have worked with major tech companies to identify and remove this kind of material from the web, and to prosecute those who create or circulate it. But the advent of generative artificial intelligence and easy-to-access tools creates a vexing new challenge for such efforts.
In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved.

In my opinion, if an image is not produced using an actual, living person, which is to say, if it’s 100 percent synthetic, then regardless of what it depicts, it shouldn’t count as “abusive”. One cannot “abuse” an AI program, because an AI program is not a person, and the mere act of using software to fabricate a completely fictional image is not “abusing” society.
The only issue I see here is that, as this technology continues to evolve, it’s becoming increasingly difficult to determine whether someone manipulated an image of a real-life individual.
However, simply defining “all AI-generated images depicting a certain subject” as illegal would open the door to mass censorship, not for the sake of protecting any actual minors, but to appease the delicate sensibilities of the general population (and to push a specific theocratic political agenda).
At the same time, this would create yet another category of “sex offense”, further expanding the already bloated registry while doing nothing to reduce real-life abuse or help actual victims whatsoever.
I think this will be an endlessly moving target for the courts. Where on the spectrum from simple stick figures (think car decals) to extremely lifelike but 100% synthetic images will they draw the line? 🤷🏻‍♂️
If they can deepfake CP with AI, then what’s to stop AI-deepfaked victims?!
Imagine being falsely accused, prosecuted, and convicted of a fake offense with a fake victim… Also imagine the struggle just to stay alive after they conveniently flag your name on a kill list…