Snapchat’s ‘creepy’ AI blunder reminds us that chatbots aren’t people. But because the lines blur, the risks grow

Artificial intelligence (AI) chatbots have become increasingly human-like by design, to the point that some of us may struggle to distinguish between human and machine.

This week, Snapchat’s My AI chatbot glitched and posted a story of what looked like a wall and ceiling, before it stopped responding to users. Naturally, the internet began to question whether the ChatGPT-powered chatbot had gained sentience.

A crash course in AI literacy could have quelled this confusion. But, beyond that, the incident reminds us that as AI chatbots grow closer to resembling humans, managing their uptake will only get more difficult – and more necessary.

From rules-based to adaptive chatbots

Since ChatGPT burst onto our screens late last year, many digital platforms have integrated AI into their services. Even as I draft this article in Microsoft Word, the software’s predictive AI capability is suggesting possible sentence completions.

Known as generative AI, this relatively new type of AI is distinguished from its predecessors by its ability to generate new content that is precise, human-like and seemingly meaningful.

Generative AI tools, including AI image generators and chatbots, are built on large language models (LLMs). These computational models analyse the associations between billions of words, sentences and paragraphs to predict what should come next in a given text. As OpenAI co-founder Ilya Sutskever puts it, an LLM is

[…] just a really, really good next-word predictor.
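
To make Sutskever’s point concrete, here is a minimal sketch of next-word prediction in code. It uses the open-source Hugging Face transformers library and the small GPT-2 model, chosen purely for illustration (it is not the model behind ChatGPT or My AI), and an invented prompt:

# A toy illustration of "next-word prediction": given a prompt, the model
# assigns a probability to every token in its vocabulary as the continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The last position holds the model's scores for every possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")

Everything a chatbot “says” is produced by repeating this step: pick a likely next token, append it to the text, and predict again.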

Advanced LLMs are also fine-tuned with human feedback. This training, often delivered through countless hours of low-cost human labour, is the reason AI chatbots can now have seemingly human-like conversations.

OpenAI’s ChatGPT is still the flagship generative AI model. Its release marked a major leap from simpler “rules-based” chatbots, such as those used in online customer support.
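
By contrast, a rules-based customer-support chatbot can be little more than a lookup table of keywords and canned replies. The sketch below is a hypothetical illustration (the keywords and responses are invented, not taken from any real product) of why such bots feel so rigid next to generative ones:

# A hypothetical rules-based chatbot: it only returns canned replies for
# keywords its authors anticipated, and contains no language model at all.
RULES = {
    "refund": "You can request a refund from your account page within 30 days.",
    "delivery": "Standard delivery takes 3 to 5 business days.",
    "password": "Use the 'Forgot password' link on the login page to reset it.",
}

FALLBACK = "Sorry, I didn't understand that. Would you like to speak to a human?"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_reply in RULES.items():
        if keyword in text:
            return canned_reply
    return FALLBACK

print(reply("Where is my delivery?"))     # matches the "delivery" rule
print(reply("My parcel hasn't arrived"))  # no rule anticipated this wording, so fallback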

Human-like chatbots that talk with a user rather than at them have been linked with higher levels of engagement. One study found the personification of chatbots leads to increased engagement which, over time, may turn into psychological dependence. Another study involving stressed participants found a human-like chatbot was more likely to be perceived as competent, and therefore more likely to help reduce participants’ stress.

These chatbots have also been effective in fulfilling organisational objectives in retail, education, workplace and healthcare settings.

Google is using generative AI to build a “personal life coach” that will supposedly help people with various personal and professional tasks, including providing life advice and answering intimate questions.

This is despite Google’s own AI safety experts warning that users could grow too dependent on AI and may experience “diminished health and wellbeing” and a “lack of agency” if they take life advice from it.

Friend or foe – or simply a bot?

In the recent Snapchat incident, the company put the whole thing down to a “temporary outage”. We may never know what actually happened; it could be yet another example of AI “hallucinating”, the result of a cyberattack, or even just an operational error.

Either way, the speed with which some users assumed the chatbot had achieved sentience suggests we are seeing unprecedented anthropomorphism of AI. It is compounded by a lack of transparency from developers, and a lack of basic understanding among the public.

We shouldn’t underestimate how individuals may be misled by the apparent authenticity of human-like chatbots.

Earlier this year, a Belgian man’s suicide was attributed to conversations he’d had with a chatbot about climate inaction and the planet’s future. In another example, a chatbot named Tessa was found to be offering harmful advice to people through an eating disorder helpline.

Chatbots may be particularly harmful to the more vulnerable among us, especially those with psychological conditions.

A new uncanny valley?

You may have heard of the “uncanny valley” effect. It refers to that uneasy feeling you get when you see a humanoid robot that almost looks human, but its slight imperfections give it away, and it ends up being creepy.

It seems the same experience is emerging in our interactions with human-like chatbots. A slight blip can raise the hairs on the back of the neck.

One solution might be to lose the human edge and revert to chatbots that are straightforward, objective and factual. But this would come at the expense of engagement and innovation.

Education and transparency are key

Even the developers of advanced AI chatbots often can’t explain how they work. Yet in some ways (and as far as commercial entities are concerned) the benefits outweigh the risks.

Generative AI has demonstrated its usefulness in big-ticket areas such as productivity, healthcare, education and even social equity. It is unlikely to go away. So how can we make it work for us?

Since 2018, there has been a significant push for governments and organisations to address the risks of AI. But applying responsible standards and regulations to a technology that is more “human-like” than any other comes with a host of challenges.

Currently, there is no legal requirement for Australian businesses to disclose their use of chatbots. In the US, California has introduced a “bot bill” that would require this, but legal experts have poked holes in it – and the bill had yet to be enforced at the time of writing this article.

Moreover, ChatGPT and similar chatbots are made public as “research previews”. This means they often come with multiple disclosures about their prototypical nature, and the onus for responsible use falls on the user.

The European Union’s AI Act, the world’s first comprehensive regulation on AI, has identified moderate regulation and education as the path forward – since excessive regulation could stunt innovation. Similar to digital literacy, AI literacy should be mandated in schools, universities and organisations, and should also be made free and accessible to the public.
