It's difficult to discuss artificial intelligence without talking about deepfake porn—a harmful byproduct of AI that has been used against everyone from Taylor Swift to Australian schoolgirls.
But a recent report from the startup Security Heroes found that out of 95,820 deepfake porn videos analysed from various sources, 53% featured South Korean singers and actresses – suggesting that this group is disproportionately targeted.
So what's behind South Korea's deepfake problem and what might be done about it?
Adolescents and minors among the victims
Deepfakes are digitally manipulated photos, video or audio files that convincingly depict someone saying or doing things they never did. Among South Korean teenagers, creating deepfakes is so common that some even refer to it as a prank. And they don't just target celebrities.
Group chats have been set up on Telegram with the specific purpose of visually abusing women, including middle and high school students, teachers and family members. Women whose images have been shared on social media platforms such as KakaoTalk, Instagram and Facebook are also regularly targeted.
The perpetrators use AI bots to generate the fake images, which are then sold and/or distributed indiscriminately, along with the victims' social media accounts, phone numbers and KakaoTalk usernames. One such Telegram group attracted a large membership, according to a Guardian report.
A lack of awareness
Despite the significant harm this form of gender-based violence causes victims, there remains a lack of awareness of the issue in South Korea.
South Korea has experienced rapid technological growth in recent decades. It ranks first in the world for smartphone ownership and is considered the country with the highest internet connectivity. Many jobs, including in hospitality, manufacturing and public transport, are rapidly being replaced by robots and AI.
But as Human Rights Watch points out, the country's progress in gender equality and other human rights measures has not kept pace with its digital progress. And research has shown that technological advances are exacerbating the problem of gender-based violence.
Since 2019, digital sex crimes against children and young people have been a significant issue in South Korea – especially due to the "Nth Room" case. This case involved hundreds of young victims (many of them minors) and around 260,000 participants who shared exploitative and violent intimate content.
The case sparked widespread outrage and calls for stronger protections. It even led to stricter provisions in the 2020 Law on Special Cases of Punishment for Sexual Crimes. However, the Supreme Prosecutors' Office said only 28% of the 17,495 digital sex offenders caught in 2021 were charged – a sign of the ongoing challenges in effectively combating digital sex crimes.
In 2020, the Ministry of Justice's task force on digital sex crimes proposed about 60 legislative provisions, which have still not been passed. The task force was disbanded shortly after the inauguration of President Yoon Suk Yeol's government in 2022.
During the 2022 presidential election campaign, Yoon said there is "no structural gender discrimination" in South Korea and promised to abolish the Ministry of Gender Equality and Family, the main ministry responsible for preventing gender-based violence. The ministerial post has remained vacant since February this year.
Can technology also be part of the answer?
But AI is not always harmful – and South Korea provides proof of this too. In 2022, a digital sex crime support center opened by the Seoul city government developed a tool that can automatically track, monitor and delete deepfake images and videos around the clock.
The technology, which won the 2024 UN Prize for Public Administration, has helped reduce the time it takes to detect deepfakes from an average of two hours to a few minutes. While such efforts may help reduce further harm from deepfakes, they are unlikely to be a comprehensive solution, because the impact on victims can be lasting.
To bring about real change, the government must require service providers such as social media platforms and messaging apps to ensure the safety of their users.
Joint efforts
On 30 August, the South Korean government announced plans to push for legislation that would criminalize the possession, purchase and viewing of deepfakes in South Korea.
However, investigations and prosecutions may still not be enough to ensure that deepfakes are recognized as a harmful form of gender-based violence in South Korea. Tackling the deepfake problem will require a multi-faceted approach, including stronger laws, reform and education.
South Korean authorities must also help raise public awareness of gender-based violence, and focus not only on supporting victims but also on developing proactive policies and educational programs to prevent violence from occurring in the first place.