March 14th, 2024: Two teenagers from Miami, Florida, aged 13 and 14, were arrested on December 22, 2023, for allegedly creating and sharing AI-generated nude images of their classmates without consent.
According to a police report cited by WIRED, the teenagers used an unnamed “AI app” to generate the explicit images of female and male classmates, ages 12 and 13.
The incident, which took place at Pinecrest Cove Academy in Miami, led to the suspension of the students on December 6 and was subsequently reported to the Miami-Dade Police Department.
The arrests and charges against the teenagers are believed to be the first of their kind in the United States related to the sharing of AI-generated nudes.
Under a 2022 Florida law that criminalizes the dissemination of deepfake sexually explicit images without the victim’s consent, the teenagers are facing third-degree felony charges, comparable to those for auto theft or false imprisonment.
As of now, neither the parents of the accused boys nor the investigator and prosecutor in charge have commented on the case.
The issue of minors creating AI-generated nudes and explicit images of other children has become increasingly common in school districts across the country.
While the Florida case is the first known instance of criminal charges related to AI-generated nude images, similar cases have come to light in the US and Europe.
The impact of generative AI on child sexual abuse material, nonconsensual deepfakes, and revenge porn has led numerous states to tackle the problem independently, as there is currently no federal law addressing nonconsensual deepfake nudes.
President Joe Biden has issued an executive order on AI asking agencies for a report on banning the use of generative AI to produce child sexual abuse material, and both the Senate and House have introduced legislation called the DEFIANCE Act of 2024 to address the problem.
Although the naked bodies depicted in AI-generated fake images are not real, they can appear authentic, potentially causing psychological distress and reputational damage for the victims.
The White House has called such incidents “alarming” and emphasized the need for new laws to address the issue.
The Internet Watch Foundation (IWF) has also reported that AI image generators are fueling a rise in child sexual abuse material (CSAM), complicating investigations and hindering the identification of victims.

