
Replacing frontline employees with AI could be a bad idea — here’s why

AI chatbots are already widely used by businesses to greet customers and answer their questions – either over the phone or on websites. Some companies have found that they can, to some extent, replace humans with machines in call centre roles.

However, the available evidence suggests there are sectors – such as healthcare and human resources – where extreme care needs to be taken regarding the use of these frontline tools, and where ethical oversight may be essential.

A recent, and highly publicised, example is that of a chatbot called Tessa, which was used by the National Eating Disorder Association (NEDA) in the US. The organisation had initially maintained a helpline operated by a combination of salaried employees and volunteers, with the express goal of assisting vulnerable people suffering from eating disorders.

However, this year, the organisation disbanded its helpline staff, announcing that it would replace them with the Tessa chatbot. The reasons for this are disputed. Former employees claim that the shift followed a decision by helpline staff to unionise. The vice president of NEDA cited an increased number of calls and longer wait times, as well as legal liabilities around using volunteer staff.

Whatever the case, after a very brief period of operation, Tessa was taken offline over reports that the chatbot had issued problematic advice that could have exacerbated the symptoms of people seeking help for eating disorders.

It was also reported that Dr Ellen Fitzsimmons-Craft and Dr C Barr Taylor, two highly qualified researchers who assisted in the creation of Tessa, had stipulated that the chatbot was never intended as a replacement for an existing helpline, nor to provide immediate assistance to those experiencing intense eating disorder symptoms.

Significant upgrade

So what was Tessa designed for? The researchers, alongside colleagues, had published an observational study highlighting the challenges they faced in designing a rule-based chatbot to interact with users who are concerned about eating disorders. It is quite a fascinating read, illustrating design choices, operations, pitfalls and amendments.

The original version of Tessa was a traditional, rule-based chatbot, albeit a highly refined one: one that follows a pre-defined structure based on logic. It could not deviate from the standardised, pre-programmed responses calibrated by its creators, as the sketch below illustrates.
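To make that concrete, here is a minimal, hypothetical sketch of a rule-based bot in the style described above (the keywords and responses are invented for illustration, not taken from Tessa). Every message is matched against pre-written rules, and the bot can only reply with answers its creators scripted in advance.

```python
# A minimal, illustrative rule-based chatbot: keyword matching plus canned replies.
# The rules below are hypothetical examples, not Tessa's actual content.

RULES = {
    "hours": "Our helpline is open Monday to Friday, 9am to 5pm.",
    "resources": "You can find self-help resources on our website.",
}

FALLBACK = "I'm sorry, I don't understand. Could you rephrase that?"

def reply(message: str) -> str:
    """Return a scripted response if a keyword matches, otherwise a fallback."""
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    # The bot cannot improvise beyond its scripted answers.
    return FALLBACK

print(reply("What are your opening hours?"))      # matches a rule
print(reply("I feel anxious about food today"))   # unanticipated input -> fallback
```

The second call shows the limitation the researchers themselves noted: anything the designers did not anticipate simply falls through to a generic fallback.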

Their conclusion included the following point: "Rule-based chatbots have the potential to reach large populations at low cost in providing information and simple interactions but are limited in understanding and responding appropriately to unanticipated user responses".

AI chatbots are already widely used to interact with customers or users of a service.
Tero Vesalainen / Shutterstock

This would appear to limit the uses for which Tessa was suitable. So how did it end up replacing the helpline previously used by NEDA? The exact chain of events is under discussion amid differing accounts but, according to NPR, the hosting company of the chatbot changed Tessa from a rule-based chatbot with pre-programmed responses to one with an "enhanced questions and answers feature".

The later version of Tessa was one employing generative AI, much like ChatGPT and similar products. These advanced AI chatbots are designed to simulate human conversational patterns with the intention of giving more realistic and useful responses. Generating these customised answers relies on large databases of information, which the AI models are trained to "understand" through a variety of technological processes: machine learning, deep learning and natural language processing.
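The contrast with the rule-based sketch above can be shown in a similarly hedged way. In the sketch below, `call_language_model` is a purely hypothetical stand-in for whichever hosted generative model an organisation might use; the point is that the reply is composed on the fly rather than chosen from a fixed script.

```python
# An illustrative sketch of a generative chatbot turn, assuming access to some
# hosted large language model. `call_language_model` is a hypothetical placeholder,
# not a real API.

def call_language_model(prompt: str) -> str:
    """Stand-in for sending the prompt to a trained generative model."""
    return "[model-generated text would appear here]"

def generative_reply(user_message: str) -> str:
    # The response is generated from the model's training data, so the operator
    # no longer controls the exact wording of each answer in advance.
    prompt = (
        "You are a supportive assistant for a helpline.\n"
        f"User: {user_message}\n"
        "Assistant:"
    )
    return call_language_model(prompt)

print(generative_reply("I feel anxious about food today"))
```

The design trade-off is exactly the one at the heart of this story: the generative version can respond to unanticipated input, but its operators give up the guarantee that every answer was vetted in advance.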

Learning lessons

Ultimately, the chatbot generated what have been described as potentially harmful answers to some users' questions. Ensuing discussions have shifted the blame from one institution to another. However, the point remains that the ensuing circumstances could potentially have been avoided if there had been a body providing ethical oversight, a "human in the loop" and adherence to the clear purpose of Tessa's original design.

It's important to learn lessons from cases such as this against the background of a rush towards the integration of AI into a variety of systems. And while these events took place in the US, they contain lessons for those seeking to do the same in other countries.

The UK appears to have a somewhat fragmented approach to this issue. The advisory board to the Centre for Data Ethics and Innovation (CDEI) was recently dissolved and its seat at the table was taken up by the newly formed Frontier AI Taskforce. There are also reports that AI systems are already being trialled in London as tools to assist workers – though not as a replacement for a helpline.

Both of these examples highlight a potential tension between ethical considerations and business interests. We must hope that the two will eventually align, balancing the wellbeing of individuals with the efficiency and benefits that AI could provide.

However, in some areas where organisations interact with the public, AI-generated responses and simulated empathy may never be enough to replace genuine humanity and compassion – particularly in the fields of medicine and mental health.
