
Australian authorities launch investigation into explicit AI deep fakes

Police in Australia launched an investigation into the distribution of AI-generated pornographic images of around 50 schoolgirls, with the perpetrator believed to be a teenage boy. 

In an interview with ABC on Wednesday, Emily, the mother of a 16-year-old girl attending Bacchus Marsh Grammar, revealed that her daughter was physically sickened after viewing the “mutilated” images online.

“I collected my daughter from a sleepover, and she was extremely distressed, vomiting because the pictures were incredibly graphic,” she explained to ABC Radio Melbourne.

The school issued a statement affirming its commitment to student welfare, noting that it is offering counselling and cooperating with the police.

“The wellbeing of our students and their families at Bacchus Marsh Grammar is a top priority and is being actively addressed,” the school stated.

This comes as the Australian government pushes for stricter laws on non-consensual explicit deep fakes, increasing prison sentences for generating and sharing CSAM, AI-generated or otherwise, to as long as seven years.

Explicit deep fakes on the rise

Experts say online predators frequenting the dark web are increasingly harnessing AI tools – specifically text-to-image generators like Stability AI’s Stable Diffusion – to generate new CSAM.

Disturbingly, these CSAM creators sometimes fixate on past child abuse survivors whose images circulate online. Child safety groups report finding numerous chatroom discussions about using AI to create more content depicting specific underage “stars” popular in these abusive communities.

AI enables people to create new explicit images that revictimize and retraumatize the survivors.

“My body will never be mine again, and that’s something that many survivors have to grapple with,” Leah Juliett, an activist and CSAM survivor, recently told the Guardian.

An October 2023 report from the UK-based Internet Watch Foundation uncovered the scope of AI-generated CSAM. The report found over 20,000 such images posted on a single dark web forum over a month. 

The images are often indistinguishable from authentic photos, depicting deeply disturbing content such as the simulated rape of infants and toddlers.

Last year, a Stanford University report revealed that hundreds of real CSAM images were included in the LAION-5B database used to train popular AI tools. Once the database was made open-source, experts say the creation of AI-generated CSAM exploded.

Recent arrests show the problem isn’t theoretical, and police forces worldwide are taking action. For example, in April, a Florida man was charged for allegedly using AI to generate explicit images of a child neighbor.

Last year, a North Carolina man – a child psychiatrist of all people – was sentenced to 40 years in prison for creating AI-generated child pornography of his patients.

And just weeks ago, the US Department of Justice announced the arrest of 42-year-old Steven Anderegg in Wisconsin for allegedly creating more than 13,000 AI-generated abusive images of children.

Current laws are not enough, say lawmakers and advocates

While most countries already have laws criminalizing computer-generated CSAM, legislators are looking to strengthen regulations.

For example, in the US, a bipartisan bill has been introduced that would allow victims to sue creators of explicit non-consensual deep fakes.

However, some gray areas remain where it’s difficult to determine precisely which laws such activities break.

For example, in Spain, a young student was found spreading AI-generated explicit images of classmates. Some argued that this could fall under pedophilia laws, leading to harsher charges, while others said it didn’t meet those criteria under current law.

A similar incident happened at a school in New Jersey, showing how children might be using these AI tools naively and exposing themselves to extreme risks in the process.

Tech companies behind AI image generators prohibit the use of their tools to create illegal content. However, numerous powerful AI models are open-source and can be run privately offline, so the box can’t be completely closed.

Moreover, much of the criminal activity has shifted to encrypted messaging platforms, making detection even harder.

If AI opened Pandora’s box, this is certainly one of the perils that lay within it.
