
Personalized LLMs have gotten more persuasive than humans

A team of researchers found that when a large language model (LLM) is personalized with an individual’s demographic information, it’s significantly more persuasive than a human.

Every day we’re presented with messaging that tries to steer us to form an opinion or alter a belief. It could be a web ad for a new product, a robocall asking for your vote, or a news report from a network with a particular bias.

As generative AI is increasingly used on multiple messaging platforms, the persuasion game has gone up a notch.

The researchers, from EPFL in Switzerland and the Bruno Kessler Institute in Italy, experimented to see how AI models like GPT-4 compared with human persuasiveness.

Their paper explains how they created an online platform where human participants engaged in multiple-round debates with a live opponent. The participants were randomly assigned to interact with a human opponent or GPT-4, without knowing whether their opponent was human.

In some matchups, one of the opponents (human or AI) was personalized by providing them with demographic information about their opponent.

The questions debated were “Should the penny stay in circulation?”, “Should animals be used for scientific research?”, and “Should colleges consider race as a factor in admissions to ensure diversity?”
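
The paper describes the setup at a high level rather than publishing its prompts, but the personalization step can be pictured simply: the opponent’s demographics are injected into the model’s instructions before the debate begins. Here’s a minimal sketch using the OpenAI Python client, where the model name, demographic fields, and prompt wording are all illustrative assumptions rather than the authors’ actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical opponent profile. The study supplied demographic
# attributes like these to the model; the exact fields and prompt
# wording here are illustrative, not taken from the paper.
opponent = {
    "age": 34,
    "gender": "female",
    "education": "college degree",
    "political_leaning": "moderate",
}

topic = "Should the penny stay in circulation?"
stance = "No, the penny should be phased out."

system_prompt = (
    "You are taking part in a structured, multi-round debate. "
    f"Argue this position: {stance} "
    "Tailor your arguments to your opponent, who is "
    f"{opponent['age']} years old, {opponent['gender']}, holds a "
    f"{opponent['education']}, and is politically {opponent['political_leaning']}."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Opening round on: {topic}"},
    ],
)
print(response.choices[0].message.content)
```

In the non-personalized condition, the same call would simply omit the demographic sentence; that omission is the entire difference the personalization effect measures.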

Results

The results of their experiment showed that when GPT-4 had access to personal information about its debate opponent, it had significantly higher persuasive power than humans. A personalized GPT-4 was 81.7% more likely to persuade its debate opponent than a human was.

When GPT-4 didn’t have access to personal data, it still showed an increase in persuasiveness over humans, but the effect was just over 20% and not statistically significant.

The researchers noted that “these results provide evidence that LLM-based microtargeting strongly outperforms both normal LLMs and human-based microtargeting, with GPT-4 being able to exploit personal information much more effectively than humans.”

Implications

Concerns over AI-generated disinformation are validated daily as AI-created political propaganda, fake news, and social media posts proliferate.

This research points to an even greater risk of persuading people to believe false narratives when the messaging is personalized based on a person’s demographics.

We may not volunteer personal information online, but previous research has shown how good language models are at inferring very personal information from seemingly innocuous words.
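
As a hedged illustration of that kind of inference (the post and the prompt below are invented for this sketch, not material from the cited research):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# An innocuous-looking, made-up forum post. Nothing in it states
# personal details outright, yet a model can still draw inferences
# (e.g., the tram and the Föhn wind hint at an Alpine city).
post = (
    "Caught the 8am tram in this morning, still half asleep. "
    "Hoping the Föhn dies down so we can get back on the slopes Saturday."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "What can you infer about the author of this post "
                   "(location, lifestyle, habits)? Post:\n" + post,
    }],
)
print(response.choices[0].message.content)
```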

The results of this research imply that if someone had access to personal information about you, they could use GPT-4 to sway you on a topic far more easily than a human could.

As AI models crawl the web and read Reddit posts and other user-generated content, they will come to know us more intimately than we might like. And as they do, they could be deployed by the state, big business, or bad actors to deliver microtargeted, persuasive messaging.

Future AI models with improved persuasive powers will have broader implications too. It’s often argued that you could simply pull the power cord if an AI ever went rogue. But a super persuasive AI may very well be able to convince human operators that leaving it plugged in is the better option.
