# The Alarming Reality of Generative AI: Child Exploitation Risks
## Understanding the Dark Side of Generative AI
Generative AI has become a hot topic in 2023, with tools like ChatGPT capturing public interest. However, similar to the early days of the Internet, the technology isn't solely beneficial and can lead to significant harm. Currently, individuals with malicious intentions are leveraging generative AI to perpetrate scams and create harmful images, particularly those depicting child sexual exploitation.
## Deepfakes: A Growing Concern
Deepfakes have gained notoriety for their ability to superimpose one person's face onto another's in videos. This technology has made it increasingly challenging to distinguish between real and fake content. A notable example is the deepfake of Tom Cruise, which, while seemingly amusing, is part of a larger issue involving numerous explicit deepfake videos circulating on platforms like Twitter.
Platforms such as Twitter permit adult content, provided it's marked as sensitive. Recently, reports surfaced indicating that deepfake pornography featuring popular TikTok influencers, including Addison Rae and Charli D'Amelio, has proliferated online. This phenomenon isn't limited to celebrities; anyone can become a victim of deepfake manipulation.
## What Exactly Are Deepfakes?
Deepfakes are artificially manipulated videos, audio, or images created using Machine Learning (ML) and Artificial Intelligence (AI) technologies. A significant concern is the sheer proportion of deepfake content that is pornographic: 96%, according to a 2019 report by Deeptrace Labs. These manipulations not only threaten individuals' privacy but also pose risks to political integrity and cybersecurity.
## The Risk of Child Sexual Exploitation
Recent advancements in generative AI allow individuals with minimal technical skills to create images from simple text prompts. This capability raises grave concerns, especially when it comes to child exploitation. A recent case in Canada highlighted this risk, where a man was sentenced to prison for creating deepfake child pornography.
With the rapid evolution of AI technology, law enforcement struggles to keep pace. Traditional methods of tracking exploitative images are becoming ineffective as generative AI can produce content at incredible speeds without digital footprints, complicating the identification of offenders.
## The Accessibility of Malicious Tools
While the companies behind tools like DALL-E, Midjourney, and Stable Diffusion implement filters to prevent the creation of explicit content, individuals with malicious intent continually find ways to bypass these restrictions. A report from the Washington Post reveals that numerous AI-generated images of child sexual exploitation have emerged on dark web forums, with guides on how to create such content being shared among pedophiles.
Rebecca Portnoff, director of data science at the nonprofit Thorn, emphasizes the da