Conspiracy theorists who debated with an artificial intelligence chatbot became more willing to admit doubts about their beliefs, according to a study that offers insights into how people deal with misinformation.
The scientists found that even the most stubborn believers became more open-minded, and that the shift persisted long after their dialogue with the machine had ended.
The research counters the notion that it is almost impossible to change the minds of people who have immersed themselves in popular but unproven ideas.
The results are notable because they suggest that AI models could play a positive role in combating misinformation, despite their vulnerability to “hallucinations” that sometimes cause them to spread falsehoods.
The work “paints a clearer picture of the human mind than many would have expected” and shows that “reasoning and evidence are not dead,” said David Rand, one of the researchers involved in the work, which was published in Science on Thursday.
“Even many conspiracy theorists respond to accurate facts and evidence – you just have to address their specific beliefs and concerns directly,” said Rand, a professor at the Massachusetts Institute of Technology's Sloan School of Management.
“While there are widespread and legitimate concerns about the power of generative AI to spread disinformation, our paper shows how it can also be part of the solution by being a highly effective educator,” he added.
The researchers investigated whether large language models (LLMs) such as OpenAI's GPT-4 Turbo could use their ability to access and synthesize information to counter persistent conspiracy theories, including that the September 11, 2001 terrorist attacks were staged, the 2020 U.S. presidential election was rigged, and the Covid-19 pandemic was engineered.
Nearly 2,200 participants shared their conspiratorial ideas with the LLM, which generated evidence to refute their claims. These dialogues reduced participants' self-rated belief in their chosen theory by an average of 20 percent, an effect that lasted at least two months after they talked to the bot, the researchers said.
An expert fact-checker reviewed a sample of the model's output for accuracy. The review found that 99.2 percent of the LLM's claims were true and 0.8 percent were misleading, the researchers said.
The study's personalized question-and-answer approach is a response to the apparent ineffectiveness of many existing strategies for debunking misinformation.
A further complication of general efforts to combat conspiracy theories is that, in some cases, the skeptical narratives, while highly embellished, contain a kernel of truth.
One theory as to why the chatbot interaction seems to work well is that it has ready access to all kinds of information that a human responder does not.
In addition, the machine treated its human interlocutors politely and empathetically – in contrast to the contempt that conspiracy theorists are sometimes shown in real life.
But other research suggests the machine's manner of address is probably not a critical factor, Rand said. He and his colleagues conducted a follow-up experiment in which the AI was asked to make the factual corrections “without niceties,” and it worked just as well, he added.
The study’s “size, robustness, and consistency in reducing conspiracy beliefs” suggest that a “scalable intervention to recalibrate misinformed beliefs may be at hand,” according to a commentary also published in Science.
But potential limitations include difficulty in responding to new conspiracy theories and in persuading people with low trust in scientific institutions to engage with the bot, said Bence Bago of the Netherlands' Tilburg University and Jean-François Bonnefon of the Toulouse School of Economics, who co-wrote the commentary.
“The AI dialogue technique is so powerful because it automates the generation of specific and thorough counter-evidence to the intricate arguments of conspiracy theorists and could therefore be used to provide accurate, corrective information at scale,” said Bago and Bonnefon, who were not involved in the research.
“A key limitation to realizing this potential lies in implementation,” they added. “Namely, how to get individuals with deeply held conspiracy beliefs to even engage with a properly trained AI program.”