Artificial intelligence (AI), now an integral part of our everyday lives, is becoming increasingly accessible and ubiquitous. Consequently, there’s a growing trend of AI advancements being exploited for criminal activities.
One significant concern is that AI gives offenders the ability to produce images and videos depicting real or deepfake child sexual exploitation material.
This is particularly important here in Australia. The Cyber Security Cooperative Research Centre has identified the country as the third-largest market for online sexual abuse material.
So, how is AI being used to create child sexual exploitation material? Is it becoming more common? And importantly, how do we combat this crime to better protect children?
Spreading faster and wider
In the United States, the Department of Homeland Security defines AI-created child sexual abuse material as “the production, through digital media, of child sexual abuse material and other wholly or partly artificial or digitally created sexualized images of children.”
The agency has recognized a variety of ways in which AI is used to create this material. These include generated images or videos that depict real children, and deepfake techniques such as de-aging or the misuse of a person’s innocent images, audio or video to generate offending content.
Deepfakes refer to hyper-realistic multimedia content generated using AI techniques and algorithms. This means any given material could be partially or completely fake.
The Department of Homeland Security has also found guides on how to use AI to generate child sexual exploitation material on the dark web.
The child safety technology company Thorn has also identified a range of ways AI is used to create this material. It noted in a report that AI can impede victim identification and can create new ways to victimize and revictimize children.
Concerningly, the ease with which the technology can be used helps generate more demand. Criminals can then share information about how to make this material (as the Department of Homeland Security found), further proliferating the abuse.
How common is it?
In 2023, an Internet Watch Foundation investigation revealed alarming statistics. Within a month, a dark web forum hosted 20,254 AI-generated images. Analysts assessed that 11,108 of these images were most likely criminal. Using UK laws, they identified 2,562 that satisfied the legal requirements for child sexual exploitation material. A further 416 were criminally prohibited images.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation:
Deepfake, AI or real? It’s getting harder for police to protect children from sexual exploitation online (2024, June 25)
retrieved 25 June 2024
from https://techxplore.com/news/2024-06-deepfake-ai-real-harder-police.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.