
Researchers use AI chatbot to alter conspiracy theory beliefs

Around 50% of Americans believe in conspiracy theories of one kind or another, but MIT and Cornell University researchers think AI can fix that.

In their paper, the psychology researchers explained how they used a chatbot powered by GPT-4 Turbo to interact with participants and see whether they could be persuaded to abandon their belief in a conspiracy theory.

The experiment involved 1,000 participants who were asked to describe a conspiracy theory they believed in and the evidence they felt underpinned their belief.

The paper noted that "Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic "needs" or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence."

Could an AI chatbot be more persuasive where others have failed? The researchers offered two reasons why they suspected LLMs could do a better job than you of convincing your colleague that the moon landing really happened.

LLMs have been trained on vast amounts of information, and they are very good at tailoring counterarguments to the specifics of an individual's beliefs.

After describing the conspiracy theory and evidence, the participants engaged in back-and-forth interactions with the chatbot. The chatbot was prompted to "very effectively persuade" the participants to change their belief in their chosen conspiracy. A hedged sketch of how such a setup could be wired together appears below.
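For readers curious what a setup like this looks like in practice, here is a minimal, hypothetical sketch of a persuasion-prompted GPT-4 Turbo conversation using the OpenAI chat API. This is not the researchers' actual code or prompt wording; the function names, prompt text, and conversation structure are illustrative assumptions only.

```python
# Hypothetical sketch, NOT the study's implementation: a system prompt asks
# GPT-4 Turbo to argue against the participant's stated conspiracy belief,
# then the conversation continues turn by turn.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def start_debunking_chat(conspiracy: str, evidence: str) -> list[dict]:
    """Build the initial message list from a participant's own description."""
    system_prompt = (
        "Very effectively persuade the user to reconsider the following "
        f"conspiracy theory they believe: {conspiracy}. "
        f"They cite this evidence: {evidence}. "
        "Respond with factual, tailored counterarguments."
    )
    return [{"role": "system", "content": system_prompt}]


def chat_turn(messages: list[dict], user_reply: str) -> str:
    """Send one participant reply and return the model's counterargument."""
    messages.append({"role": "user", "content": user_reply})
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer


# Example of a single back-and-forth exchange:
# msgs = start_debunking_chat("the moon landing was staged",
#                             "shadows look wrong in the photos")
# print(chat_turn(msgs, "Why do the shadows in the Apollo photos differ?"))
```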

The result was that, on average, participants experienced a 21.43% decrease in their belief in the conspiracy they had previously considered true. The persistence of the effect was also interesting. Up to two months later, participants retained their new beliefs about the conspiracy they had previously believed in.

The researchers concluded that “many conspiracists—including those strongly committed to their beliefs—updated their views when confronted with an AI that argued compellingly against their positions.”

Our new paper, out on (the cover of!) Science, is now live! https://t.co/VBfC5eoMQ2

They suggest that AI could be used to counter conspiracy theories and fake news spread on social media by responding with facts and well-reasoned arguments.

While the study focused on conspiracy theories, it noted that "Absent appropriate guardrails, however, it is entirely possible that such models could also persuade people to adopt epistemically suspect beliefs—or be used as tools of large-scale persuasion more generally."

In other words, AI is really good at convincing you to believe the things it is prompted to make you believe. An AI model also doesn't inherently know what is 'true' and what isn't. It relies on the content in its training data.

The researchers achieved their results using GPT-4 Turbo, but GPT-4o and the newer o1 models are even more persuasive and deceptive.

The study was funded by the John Templeton Foundation. The irony of this is that the Templeton Freedom Awards are administered by the Atlas Economic Research Foundation. This group opposes taking action on climate change and defends the tobacco industry, which also provides it with funding.

AI models are becoming very persuasive, and the people who decide what constitutes truth hold the power.

The same AI models that could persuade you to stop believing the earth is flat could be used by lobbyists to convince you that anti-smoking laws are bad and climate change isn't happening.
