Sexton said his charity organization, which is focused on combating online child sexual abuse, first began fielding reports about abusive AI-generated imagery earlier this year. “They’re taking existing real content and using that to create new content of these victims,” he said.

Sexton said IWF analysts discovered faces of famous children online as well as a “massive demand for the creation of more images of children who’ve already been abused, possibly years ago.” Perpetrators could also use the images to groom and coerce new victims.

If it isn’t stopped, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters.

The report exposes a dark side of the race to build generative AI systems that enable users to describe in words what they want to produce, from emails to novel artwork or videos, and have the system spit it out.