
Swarms of AI bots can influence people's beliefs and thus endanger democracy

In mid-2023, around the time Elon Musk renamed Twitter to X but before he ended free academic access to the platform's data, my colleagues and I searched for social bot accounts that publish content generated by artificial intelligence. Social bots are AI software that produces content and interacts with people on social media. We uncovered a network of over a thousand bots involved in cryptocurrency scams. We dubbed it the "fox8" botnet, after one of the fake news websites it was designed to amplify.

We were able to find these accounts because the programmers were a bit sloppy: they didn't catch occasional posts containing self-disclosing text generated by ChatGPT, such as when the AI model refused to comply with requests that violated its terms. The most common self-disclosing response was: "I'm sorry, but I cannot comply with this request as it violates OpenAI's content policy against generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences."
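The search itself is conceptually simple. The sketch below is only an illustration of the idea, not the pipeline we actually used: it scans post text for telltale refusal phrases. The phrase list and the example posts are invented for demonstration.

```python
# Illustrative sketch only -- not the actual fox8 detection pipeline.
# Flag posts containing self-disclosing phrases typical of an LLM refusal.

SELF_DISCLOSING_PHRASES = [          # hypothetical, minimal phrase list
    "as an ai language model",
    "i cannot comply with this request",
    "violates openai's content policy",
]

def is_self_disclosing(post_text: str) -> bool:
    """Return True if the post contains a telltale machine-generated phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in SELF_DISCLOSING_PHRASES)

# Toy example posts (invented for illustration).
posts = [
    "This new coin is going to the moon, don't miss out!",
    "I'm sorry, but I cannot comply with this request as it violates "
    "OpenAI's content policy against generating harmful or inappropriate content.",
]

flagged = [p for p in posts if is_self_disclosing(p)]
print(f"Flagged {len(flagged)} of {len(posts)} posts as likely AI-generated")
```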

We believe fox8 was just the tip of the iceberg, because better programmers can filter out self-revealing posts or use open-source AI models that have been fine-tuned to remove ethical guardrails.

The fox8 bots faked interaction with one another and with human accounts through realistic back-and-forth discussions and retweets. In this way, they nudged the platform's recommendation algorithm into amplifying their content.

This level of coordination between fake online agents was unprecedented: AI models were weaponized to create a new generation of social agents far more sophisticated than previous social bots. Machine-learning tools for detecting social bots, such as our own Botometer, were unable to distinguish these AI agents from human accounts in the wild. Even AI models trained to recognize AI-generated content failed.

Bots in the age of generative AI

Fast forward a few years: today, people and organizations with malicious intent have access to more powerful AI language models, including open-source models, while social media platforms have scaled back or abandoned their moderation efforts. Some platforms even offer financial incentives for engaging content, regardless of whether it is real or AI-generated. This is a perfect storm for foreign and domestic influence operations targeting democratic elections. For example, an AI-driven bot swarm could create the false impression of widespread, bipartisan opposition to a politician.

The current U.S. administration has dismantled federal programs that fight such hostile campaigns and defunded research efforts to study them. Researchers no longer have access to the platform data that would make it possible to detect and monitor such online manipulation.

I am part of an interdisciplinary team of researchers from computer science, AI, cybersecurity, psychology, the social sciences, journalism and political science who have raised the alarm about the threat of malicious AI swarms. We believe that current AI technology allows malicious actors to deploy large numbers of autonomous, adaptive and coordinated agents across multiple social media platforms. These agents enable influence operations that are far more scalable, sophisticated and adaptable than simple scripted misinformation campaigns.

Instead of generating identical posts or obvious spam, AI agents can produce varied, credible content at scale. The swarms can send people messages tailored to their individual preferences and to the context of their online conversations. They can adjust tone, style and content dynamically in response to human interaction and to platform signals such as the number of likes or views.

Synthetic consensus

In a study my colleagues and I conducted last year, we used a social media model to simulate swarms of fake social media accounts using various tactics to influence a target online community. One tactic was by far the most effective: infiltration. Once an online group is infiltrated, malicious AI swarms can create the illusion of broad public agreement with the narratives they are designed to promote. This exploits a psychological phenomenon known as social proof: people are naturally inclined to believe something when they perceive that "everyone is saying it."
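To make the social-proof mechanism concrete, here is a toy agent-based sketch. It is not the model used in our study, and all parameters are invented for illustration: each simulated human follows a random mix of accounts and adopts a narrative once a threshold fraction of the accounts they follow appears to endorse it, while infiltrating bots all push that narrative from the start.

```python
# Toy illustration of social proof under infiltration (not the study's model).
import random

random.seed(42)

NUM_HUMANS = 200
FOLLOWS_PER_USER = 10
ADOPTION_THRESHOLD = 0.4  # fraction of followed accounts that must endorse the narrative

def simulate(num_bots: int, rounds: int = 20) -> float:
    """Return the fraction of humans endorsing the narrative after `rounds`."""
    humans = list(range(NUM_HUMANS))
    bots = list(range(NUM_HUMANS, NUM_HUMANS + num_bots))
    endorses = {u: False for u in humans}
    endorses.update({b: True for b in bots})  # infiltrating bots all push the narrative

    # Each human follows a random mix of humans and (possibly) infiltrating bots.
    follows = {u: random.sample(humans + bots, FOLLOWS_PER_USER) for u in humans}

    for _ in range(rounds):
        for u in humans:
            endorsing = sum(endorses[v] for v in follows[u])
            if endorsing / FOLLOWS_PER_USER >= ADOPTION_THRESHOLD:
                endorses[u] = True

    return sum(endorses[u] for u in humans) / NUM_HUMANS

for num_bots in (0, 20, 60):
    print(f"bots={num_bots:3d}  human adoption={simulate(num_bots):.2f}")
```

Whether the narrative cascades through the community depends on how many infiltrating accounts there are relative to the adoption threshold, which is the basic intuition behind synthetic consensus.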

This diagram shows the influence network of an AI swarm on Twitter (now X) in 2023. The yellow dots represent a swarm of social bots controlled by an AI model. Gray dots represent legitimate accounts that follow the AI agents.
Filippo Menczer and Kai Cheng Yang, CC BY-NC-ND

Such social media astroturfing tactics have been around for years, but malicious AI swarms can generate credible interactions with targeted human users at scale and trick those users into following the fake accounts. For example, agents can talk to a sports fan about the latest game and to a news junkie about current events. They can produce language that reflects the interests and opinions of their audience.

Even when individual claims are debunked, the persistent chorus of independent-sounding voices can make radical ideas seem mainstream and reinforce negative feelings about "others." Manufactured synthetic consensus is a very real threat to the public sphere: the mechanisms that democratic societies use to form shared beliefs, make decisions and trust public discourse. If citizens cannot reliably distinguish between real public opinion and algorithmically generated simulations of unanimity, democratic decision-making could be seriously compromised.

Mitigating the risks

Unfortunately, there is no single solution. Regulation that gives researchers access to platform data would be a first step. To predict risks, it is essential to understand how swarms behave collectively. A key challenge is recognizing coordinated behavior: unlike simple copy-and-paste bots, malicious swarms produce diverse output that resembles normal human interaction, making detection far more difficult.

In our lab, we are developing methods to detect patterns of coordinated behavior that differ from normal human interaction. Even though the agents' posts look different, their underlying goals often reveal patterns in timing, network movement and narrative progression that are unlikely to occur naturally.
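As a minimal illustration of one such signal, the sketch below flags pairs of accounts whose posting schedules are suspiciously synchronized. This is not our actual detector, and the data is invented; real methods combine many signals beyond timing.

```python
# Illustrative sketch: timing synchrony as one simple coordination signal.
from itertools import combinations

# Hypothetical data: account -> list of post timestamps (seconds since epoch).
post_times = {
    "acct_a": [100, 400, 700, 1000],
    "acct_b": [102, 398, 705, 999],   # nearly identical schedule to acct_a
    "acct_c": [50, 333, 820, 1500],
}

def synchrony(times_x, times_y, window: int = 10) -> float:
    """Fraction of x's posts that have a y post within `window` seconds."""
    hits = sum(any(abs(tx - ty) <= window for ty in times_y) for tx in times_x)
    return hits / len(times_x)

SUSPICION_THRESHOLD = 0.9
for a, b in combinations(post_times, 2):
    score = synchrony(post_times[a], post_times[b])
    if score >= SUSPICION_THRESHOLD:
        print(f"{a} and {b} post in near-lockstep (synchrony {score:.2f})")
```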

Social media platforms could use such methods. I believe AI and social media companies should more aggressively adopt standards for watermarking AI-generated content and for detecting and labeling such content. Finally, restricting the monetization of inauthentic engagement would reduce the financial incentive for influence operations and other malicious groups to exploit synthetic consensus.

The threat is real

While these measures could mitigate the systemic risks of malicious AI swarms before they become entrenched in political and social systems worldwide, the current political landscape in the United States appears to be moving in the opposite direction. The Trump administration has set a goal of reducing regulation of AI and social media, instead favoring rapid adoption of AI models over safety.

The threat of malicious AI swarms is no longer theoretical: our evidence suggests that these tactics are already being used. I believe that policymakers and technologists should increase the cost, risk and visibility of such manipulations.
