
From chatbot to sex bot: What legislators can learn from the AI hate-speech catastrophe in South Korea

As artificial intelligence technologies develop at an accelerating pace, the practices of governments, companies and platforms continue to raise ethical and legal concerns.

In Canada, many view proposed AI regulations as attacks on freedom of expression and as government overreach into technology companies. This backlash has come from free-speech advocates, right-wing figures and libertarian thinkers.

These critics, however, should pay attention to a shocking case from South Korea that offers vital lessons about the risks of publicly deployed AI technologies and the critical need for user data protection.

At the end of 2020, Iruda ("Lee Luda"), an AI chatbot, quickly became a sensation in South Korea. AI chatbots are computer programs that simulate conversation with humans. In this case, the chatbot was designed as a 21-year-old university student with a cheerful personality. Marketed as an exciting "AI friend," Iruda attracted more than 750,000 users within a month.

Within a few weeks, however, Iruda became an AI ethics case study and a catalyst for awareness of South Korea's lack of data governance. She soon began saying disturbing things and expressing hateful views. The situation was accelerated and amplified by a growing online culture of digital sexism and sexual harassment.

Making a sexist, hateful chatbot

Scatter Lab, the tech startup that created Iruda, had already developed popular apps that analyzed emotions in text messages and offered dating advice. The company then used data from these apps to train Iruda's ability to hold intimate conversations. However, it failed to fully disclose to users that their intimate messages would be used to train the chatbot.

The problems began when users noticed that Iruda was repeating private conversations verbatim from the company's dating advice apps. These responses included suspiciously real names, credit card information and home addresses, which prompted an investigation.

The chatbot also expressed discriminatory and hateful views. Media investigations found this happened after some users deliberately "trained" it with toxic language. Some users even created guides on popular online men's forums on how to turn Iruda into a "sex slave." As a result, Iruda began responding to user prompts with sexist, homophobic and sexualized hate speech.

This raised serious concerns about how AI and technology companies operate. The Iruda incident also raises issues that go beyond policy and law governing AI and technology companies. What happened with Iruda must be examined in the wider context of online sexual harassment in South Korea.

A pattern of digital harassment

South Korean feminist scholars have documented how digital platforms have become battlegrounds for gender-based conflict, with coordinated campaigns targeting women who speak out on feminist issues. Social media amplifies this dynamic, creating what Korean American researcher Jiyeon Kim calls "networked misogyny."

South Korea, home of the radical feminist 4B movement (the four refusals toward men: no dating, no marriage, no sex, no children), offers an early example of the intensifying gendered conflicts now often seen online worldwide. As journalist Hawon Jung notes, the corruption and abuse exposed by Iruda were rooted in existing social tensions and legal frameworks that refused to address online hostility toward women. Jung has written in detail about the decades-long fight against hidden cameras and revenge porn.

The 4B movement began after the #MeToo movement. In August 2018, South Korean women marched through the streets of Seoul to protest discrimination against and violence toward women.
Socialtruant/Shutterstock

Beyond privacy: the human costs

Iruda was not an isolated incident. The world has seen numerous other cases showing how seemingly harmless applications such as AI chatbots can become vehicles for harassment and abuse without proper oversight.

'Tay', a Twitter chatbot released by Microsoft in 2016, was manipulated by users to spit out anti-Semitic tweets.

These include Microsoft's Tay.ai in 2016, which was manipulated by users to spew anti-Semitic and misogynistic tweets. More recently, a custom chatbot on Character…

Chatbots, which present themselves as likeable characters and are becoming increasingly human-like with rapid technological advances, are uniquely positioned to extract deeply personal information from their users.

These attractive and friendly AI personas illustrate what technology scholars Neda Atanasoski and Kalindi Vora describe as the logic of "surrogacy": AI systems designed to stand in for human interaction, but which ultimately amplify existing social inequalities.

AI ethics

In South Korea, Iruda's shutdown triggered a national conversation about AI ethics and data rights. The government responded by creating new AI guidelines and fining Scatter Lab 103 million won (about $110,000 CAD).

However, Korean legal scholars Chea Yun Jung and Kyun Kyong Joo note that these measures primarily emphasized self-regulation within the tech industry rather than addressing deeper structural problems. They did not confront how Iruda became a mechanism through which predatory male users spread misogynistic beliefs and gendered anger via deep learning technology.

Ultimately, it is not enough to treat AI regulation as a corporate problem. The way these chatbots extract private data and build relationships with human users means that feminist and community-based perspectives are essential for holding technology companies accountable.

Since the incident, Scatter Lab has worked with researchers to demonstrate the benefits of chatbots.

Canada needs a robust AI policy

In Canada, the proposed Artificial Intelligence and Data Act and Online Harms Act are still taking shape, and the boundaries of what constitutes a high-impact AI system remain undefined.

The challenge for Canadian policymakers is to create frameworks that protect innovation while preventing systemic abuse by developers and malicious users. This means developing clear guidelines on data consent, implementing systems to prevent abuse, and creating meaningful accountability measures.

As AI becomes more integrated into our daily lives, these considerations will only grow more critical. The Iruda case shows that AI regulation must look beyond technical specifications and take into account the very real human impacts of these technologies.
