
AI lie detector beats humans and could be socially disruptive

Researchers from the University of Würzburg and the Max Planck Institute for Human Development trained an AI model to detect lies, and it could disrupt the way we engage with one another.

Humans aren’t great at telling whether someone is lying or telling the truth. Experiments show that our hit rate is around 50% at best, and this poor performance shapes how we engage with one another.

The truth-default theory (TDT) says that people will typically assume that what another person tells them is true. With our 50/50 lie detection ability, the social cost of calling someone a liar is simply too big a risk, and fact-checking isn’t always practical in the moment.

Polygraphs and other lie-detecting tech can pick up on cues like stress indicators and eye movements, but you’re not likely to use one of these in your next conversation. Could AI help?

The paper explains how the research team trained Google’s BERT language model to detect when people were lying.

The researchers recruited 986 participants and asked them to describe their weekend plans, along with a follow-up statement supporting the truthfulness of their description.

They were then presented with the weekend plans of another participant and asked to write a false supporting statement arguing that these were in fact their own plans for the weekend.

BERT was trained on 80% of the 1,536 statements and then tasked with evaluating the truthfulness of the remaining 20%.

The model labeled statements as true or false with an accuracy of 66.86%, significantly better than the human judges, who achieved a 46.47% accuracy rate in further experiments.
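For the technically curious, this setup amounts to standard supervised fine-tuning of BERT as a binary text classifier with an 80/20 train/test split. The sketch below shows what such a pipeline looks like using the Hugging Face transformers and datasets libraries. It is not the authors’ code: the placeholder records, the bert-base-uncased checkpoint, and the hyperparameters are all assumptions for illustration.

```python
# Minimal sketch (not the authors' code): fine-tune BERT as a binary
# true/false classifier on an 80/20 split. Data, checkpoint, and
# hyperparameters are placeholders.
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder records; the study used 1,536 participant-written statements.
records = [
    {"text": "This weekend I am visiting my parents in Hamburg.", "label": 1},
    {"text": "On Saturday I will be running a marathon in Rome.", "label": 0},
    # ... one record per statement, label 1 = true, 0 = false
]
splits = Dataset.from_list(records).train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate/pad each statement to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

splits = splits.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # two classes: true vs. false

def compute_metrics(eval_pred):
    # Plain accuracy, the metric the article quotes (66.86% in the paper).
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    compute_metrics=compute_metrics,
)
trainer.train()            # fine-tune on the 80% training split
print(trainer.evaluate())  # accuracy on the held-out 20%
```

With a dataset of only 1,536 statements, details like the split seed and epoch count can move the numbers noticeably; the 66.86% figure is the paper’s own result, not something this sketch is meant to reproduce.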

Would you use an AI lie detector?

The researchers found that when participants were given the option to use the AI lie detection model, only a third accepted the offer.

Those who opted to use the algorithm almost always followed its prediction, whether that meant accepting the statement as true or making an accusation of lying.

Participants who requested algorithmic predictions made accusations almost 85% of the time when the model suggested a statement was false. The baseline accusation rate among those who didn’t request machine predictions was 19.71%.

People who are open to the idea of an AI lie detector are more likely to call BS when they see the red light flashing.

The researchers suggest that “One plausible explanation is that an available lie-detection algorithm offers the chance to transfer the accountability for accusations from oneself to the machine-learning system.”

‘I’m not calling you a liar, the machine is.’

This changes everything

What would happen in our societies if people were four times more likely to start calling each other liars?

The researchers concluded that if people relied on AI as the arbiter of truth, it could have strong disruptive potential.

The paper noted that “high accusation rates may strain our social fabric by fostering generalized distrust and further increasing polarization between groups that already find it difficult to trust each other.”

An accurate AI lie detector would have positive impacts too. It could identify AI-generated disinformation and fake news, assist in business negotiations, or combat insurance fraud.

What about the ethics of using a tool like this? Could border agents use it to detect whether a migrant’s asylum claim is true or an opportunistic fabrication?

More advanced models than BERT will likely push AI’s lie detection accuracy toward a point where human attempts at deception become all too easy to spot.

The researchers concluded that their “research underscores the urgent need for a comprehensive policy framework to handle the impact of AI-powered lie detection algorithms.”
