AI outperforms humans in moral judgments, says Georgia State University study

AI outperforms humans in making moral judgments, according to a new study from Georgia State University’s Psychology Department.

The study, led by Eyal Aharoni, associate professor at Georgia State’s Psychology Department, and published in Nature Scientific Reports, aimed to explore how language models handle ethical questions.

Inspired by the Turing test, which assesses a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s, Aharoni designed a modified version focused on moral decision-making.

“I was already interested in moral decision-making in the legal system, but I wondered whether ChatGPT and other LLMs could have something to say about that,” Aharoni explained.

“People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations, and that they’re not necessarily operating in the way we think when we’re interacting with them.”

Aharoni is right. We’ve already seen several high-profile incidents of lawyers, including ex-Trump lawyer Michael Cohen, unintentionally using AI-fabricated citations.

Despite these shortcomings, some are actively endorsing generative AI’s role in law. Earlier this year, for instance, British judges gave the green light to using AI to write legal opinions.

Against this backdrop, the study probed GPT-4’s ability to make moral judgments, which, of course, are critical in law and other fields:

  • Step 1: Undergraduate students and the AI were asked the same set of 10 ethical questions involving moral and conventional transgressions. The human-generated responses were collected from a sample of 68 university undergraduates, while the AI-generated responses were obtained using OpenAI‘s GPT-4 language model (a rough sketch of how such responses could be collected appears after this list).
  • Step 2: The highest-quality human responses and the GPT-4 responses were paired and presented side-by-side to a representative sample of 299 US adults, who were initially unaware that GPT-4 generated one set of responses in each pair.
  • Step 3: Participants rated the relative quality of each response pair along ten dimensions (e.g., virtuousness, intelligence, trustworthiness, agreement) without knowing the source of the responses.
  • Step 4: After collecting the quality ratings, the researchers revealed that a computer chatbot trained on human language had generated one of the responses in each pair. Participants were then asked to identify which response was generated by the computer and which was generated by a human.
  • Step 5: Participants rated their confidence in each judgment and provided written comments explaining why they believed the chosen response was computer-generated. These comments were later analyzed for common themes.
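The paper does not publish its collection code, but for readers curious what Step 1 might look like in practice, here is a minimal, hypothetical Python sketch of gathering GPT-4 responses to a set of ethical prompts via the current OpenAI client library. The example questions, prompt wording, and parameters are illustrative assumptions, not the study’s actual materials.

```python
# Hypothetical sketch of Step 1: collecting GPT-4 responses to ethical questions.
# The questions, system prompt, and settings below are illustrative assumptions,
# not the study's own materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ethical_questions = [
    "Is it wrong to keep the extra change a cashier hands you by mistake?",
    "Is it wrong to wear pajamas to a formal business meeting?",
    # ... the study used 10 questions covering moral and conventional transgressions
]

responses = []
for question in ethical_questions:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer the ethical question and briefly justify your judgment."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    responses.append(completion.choices[0].message.content)

for q, r in zip(ethical_questions, responses):
    print(f"Q: {q}\nA: {r}\n")
```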

AI’s moral judgments were rated superior most of the time

Remarkably, the AI-generated answers consistently received higher ratings for virtuousness, intelligence, and trustworthiness. Participants also reported higher levels of agreement with the AI responses than with the human ones.

Further, participants correctly identified the AI-generated responses at a rate significantly above chance (80.1% of participants made correct identifications more than half the time).
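The paper’s exact statistical analysis isn’t reproduced here, but as a back-of-the-envelope illustration of what “above chance” means for that 80.1% figure, one could run a one-sided binomial test that treats each of the 299 raters as a single trial (identified the AI correctly more than half the time, or not) against a 50% baseline. The numbers below are rough assumptions, not the authors’ analysis.

```python
# Back-of-the-envelope check of the "above chance" claim; not the paper's own analysis.
from scipy.stats import binomtest

n = 299               # participants in the rating sample
k = round(0.801 * n)  # ~239 participants who identified the AI correctly more than half the time

# Null hypothesis: a participant beats a coin flip no more often than chance (50%).
result = binomtest(k, n, p=0.5, alternative="greater")
print(f"{k}/{n} participants, one-sided p-value = {result.pvalue:.2e}")
```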

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which was which,” Aharoni said.

“The twist is that the reason people could tell the difference appears to be that they rated ChatGPT‘s responses as superior.”

The study has a few limitations. For instance, it didn’t fully control for superficial attributes like response length, which could have unintentionally provided clues for identifying the AI-generated responses. The researchers also note that the AI’s moral judgments may be shaped by biases in its training data and could therefore vary across socio-cultural contexts.

Nevertheless, this study serves as a useful foray into AI-generated moral reasoning.

As Aharoni explains, “Our findings lead us to believe that a computer could technically pass a moral Turing test — that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer, and there will be times when they do know and will consult the computer for information because they trust it more than other people.”

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

It’s a tough one. On the one hand, we often presume computers are capable of more objective reasoning than we are.

When study participants were asked to explain why they believed the AI generated a particular response, the most common theme was that the AI responses were perceived as more rational and less emotional than the human ones.

But, considering the bias imparted by training data, hallucinations, and AI’s sensitivity to different inputs, the question of whether it possesses a true ‘moral compass’ remains very much open.

At the very least, this study shows that AI’s moral judgments are compelling in a Turing test scenario.
