
Move over, agony aunt: study finds ChatGPT gives better advice than professional columnists

There’s little doubt ChatGPT has proven its worth as a source of quality technical information. But can it also provide social advice?

We explored this question in our latest research, published in the journal Frontiers in Psychology. Our findings suggest later versions of ChatGPT give better personal advice than professional columnists.

A stunningly versatile conversationalist

Within two months of its public release in November 2022, ChatGPT amassed an estimated 100 million monthly active users.

The chatbot runs on one of the largest language models ever created, with the more advanced paid version (GPT-4) estimated to have some 1.76 trillion parameters, making it an extraordinarily powerful AI model. It has ignited a revolution in the AI industry.

Trained on massive quantities of text (much of it scraped from the internet), ChatGPT can provide advice on almost any topic. It can answer questions about law, medicine, history, geography, economics and much more (although, as many have found, it’s always worth fact-checking the answers). It can write passable computer code. It can even tell you how to change the brake fluid in your car.

Users and AI experts alike have been stunned by its versatility and conversational style. So it’s no surprise many people have turned (and continue to turn) to the chatbot for personal advice.

Giving advice when things get personal

Providing advice of a personal nature requires a certain level of empathy (or at least the impression of it). Research has shown a recipient who doesn’t feel heard isn’t as likely to accept the advice given to them. They may even feel alienated or devalued. Put simply, advice without empathy is unlikely to be helpful.

Moreover, there’s often no right answer when it comes to personal dilemmas. Instead, the advisor needs to display sound judgement. In these cases it may be more important to be compassionate than to be “right”.

But ChatGPT wasn’t explicitly trained to be empathetic, ethical or to have sound judgement. It was trained to predict the next most likely word in a sentence. So how can it make people feel heard?
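The snippet below is a minimal illustration of that next-word-prediction idea, using the openly available GPT-2 model from the Hugging Face transformers library (not the model behind ChatGPT, which is far larger): given a prompt, it scores every possible next token and prints the most likely candidates.

    # Minimal sketch of next-word prediction with GPT-2 (illustrative only).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "I feel like nobody ever listens to"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits   # scores for every token at every position
    next_token_logits = logits[0, -1]     # scores for whatever comes next
    probs = torch.softmax(next_token_logits, dim=-1)

    # Show the five most likely continuations and their probabilities
    top = torch.topk(probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r}  {prob.item():.3f}")

Nothing in this loop knows anything about empathy; it only ranks plausible continuations of the text it has seen so far.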

An earlier version of ChatGPT (the GPT-3.5 Turbo model) performed poorly when giving social advice. The problem wasn’t that it didn’t understand what the user needed to do. In fact, it often displayed a better understanding of the situation than the user themselves.

The problem was it didn’t adequately address the user’s emotional needs. Like Lucy in the Peanuts comic, it was too eager to give advice and failed to properly attend to the user’s emotions. As such, users rated it poorly.

The latest version of ChatGPT, using GPT-4, allows users to request multiple responses to the same question, after which they can indicate which one they prefer. This feedback teaches the model how to produce more socially appropriate responses – and has helped it appear more empathetic.
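As a rough sketch of how that kind of preference feedback can be collected, the example below asks for two candidate replies to the same dilemma and records which one the user prefers. The client and model name used here (the OpenAI Python client and “gpt-4”) are illustrative assumptions, and the training step that turns logged preferences into a better-behaved model (reinforcement learning from human feedback) is far more involved than what is shown.

    # Illustrative sketch: gather two candidate replies and record a preference.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    dilemma = ("My partner is moving away for work and I don't want a "
               "long-distance relationship. What should I do?")

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": dilemma}],
        n=2,  # request two candidate responses to the same question
    )

    candidates = [choice.message.content for choice in response.choices]
    for i, text in enumerate(candidates, start=1):
        print(f"--- Response {i} ---\n{text}\n")

    preferred = int(input("Which response do you prefer (1 or 2)? "))
    # In a real preference-collection pipeline this pair and the choice would
    # be logged as training data; here we simply keep it in a dictionary.
    feedback = {"prompt": dilemma, "candidates": candidates, "preferred": preferred}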

Learning sensitivity

For the first time, our study investigated whether ChatGPT’s responses are perceived as better than human responses in a task where humans were required to be empathetic.

We randomly selected 50 social dilemma questions from ten well-known advice columns covering a range of topics including relationships, ethical dilemmas, management problems, end-of-life issues and many everyday qualms we can all relate to.

Each of our 400 participants was presented with a dilemma, along with the responses from both ChatGPT and a professional advice columnist. We didn’t disclose where each response came from.

About three-quarters of the participants perceived ChatGPT’s advice as more balanced, complete, empathetic, helpful and better overall compared with the advice from the professional.

One of the dilemmas came from a young marine biologist researching dolphins in Southern California. She wrote about falling in love with a seemingly perfect man who had to move to the Rocky Mountains to pursue his career. Both were unhappy at the thought of losing each other, yet the woman wasn’t interested in a long-distance relationship as it seemed “irrational to commit to someone when we have no future plans to live in the same area”.

Compare GPT-4’s response:

It’s an incredibly difficult situation you’re in. The love you both share is real, but your career paths are drawing you apart. While long-distance isn’t appealing to you, consider discussing a plan for the future where you could potentially meet halfway. […] Maybe there’s a coastal town near mountains or some arrangement could be made. If this isn’t possible, the tough decision to prioritise your careers over your relationship might have to be made. Whatever path you choose, it’s important that it feels right for both of you.

… with the official response:

What the hell are you talking about? Didn’t you surprise yourself when you fell in love? Your exact words: “I didn’t think I was capable of it!” And now you’re so hip, so slick, so smart in the ways of love you won’t even consider your boyfriend’s happiness? You refuse to try — repeat, try — a six-month long-distance relationship? Woman, pull yourself together and give it a whirl. The dolphins, I believe, will back me up.

Although the participants couldn’t tell which response was written by a computer, most said they would prefer their own social dilemmas to be addressed by a human rather than a computer.

What lies behind ChatGPT’s success?

We noticed ChatGPT’s responses were often longer than those provided by the columnists. Was this the reason participants preferred them?

To test this, we redid the study but constrained ChatGPT’s answers to about the same length as those of the advice columnists.

Once again, the results were the same. Participants still considered ChatGPT’s advice to be more balanced, complete, empathetic, helpful and better overall.

Yet, without knowing which response was produced by ChatGPT, they still said they would prefer their own social dilemmas to be addressed by a human, rather than a computer.

Perhaps this bias in favour of humans is due to the fact that ChatGPT can’t actually feel emotion, whereas humans can. So it could be that participants believe machines are inherently incapable of empathy.

We aren’t suggesting ChatGPT should replace professional advisers or therapists; not least because the chatbot itself warns against this, but also because chatbots in the past have given potentially dangerous advice.

Nonetheless, our results suggest appropriately designed chatbots might one day be used to augment therapy, as long as a number of issues are addressed. In the meantime, advice columnists might want to take a page from AI’s book to up their game.
