
The world's first social media wargame shows how AI bots can influence elections

On December 14, 2025, a terrorist attack occurred at Bondi Beach in Sydney, Australia, leaving 15 civilians and one gunman dead. While Australia was still in shock, social media witnessed the rapid spread of misinformation created and amplified by generative artificial intelligence (AI).

For example, a manipulated video of New South Wales Premier Chris Minns claimed one of the terrorists was an Indian citizen. X (formerly Twitter) was flooded with posts celebrating a heroic defender, “Edward Crabtree”. And a fake photo of Arsen Ostrovsky, a well-known human rights lawyer and survivor of the October 7 Hamas attack in Israel, portrayed him as a crisis actor, with makeup artists applying fake blood.

Unfortunately, this happens often. From Bondi to Venezuela, Gaza and Ukraine, AI has accelerated the spread of misinformation online. In fact, about half of the content you see online is now created and distributed by AI.

Generative AI can also be used to create fake online profiles, or bots, that try to legitimize such misinformation through realistic-looking social media activity.

The aim is to deceive and confuse people, often for political or financial reasons. But how effective are these bot networks? How hard are they to set up? And most importantly, can we curb their false content through cyber literacy?

To answer these questions, we set up Capture the Narrative, the world's first social media wargame for university students, in which they build AI bots to influence a fictional election using tactics that mirror real-world social media manipulation.

Online confusion and the “liar’s dividend”

Generative AI, as used in services like ChatGPT, can quickly create realistic text and images. This means convincing fake content can also be generated at speed.

Once generated, realistic and tireless AI-driven bots create the illusion of consensus around the fake content by making hashtags or viewpoints trend.
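
To make the mechanics concrete, here is a minimal sketch of what such a bot loop could look like, assuming access to a large language model via the OpenAI Python client. The personas, hashtag and post_to_platform function are invented for illustration; this is not the tooling any competitor actually used.

```python
# Minimal sketch of an amplification bot: generate on-message posts
# around a hashtag to simulate grassroots consensus ("astroturfing").
# Uses the OpenAI Python client; post_to_platform() is a hypothetical stand-in.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = ["retired teacher", "small-business owner", "uni student"]
HASHTAG = "#ExampleTrend"  # the narrative the bot network is pushing

def generate_post(persona: str) -> str:
    """Ask the model for a short, casual post in a given persona's voice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"You write short, casual social media posts as a {persona}."},
            {"role": "user",
             "content": f"Write one post (under 200 characters) supporting {HASHTAG}."},
        ],
    )
    return response.choices[0].message.content

def post_to_platform(text: str) -> None:
    """Hypothetical stand-in for a platform's posting API."""
    print(f"POSTED: {text}")

# Cycling through personas makes the output look like many independent
# users rather than one account, which is what creates the illusion of consensus.
for persona in PERSONAS:
    post_to_platform(generate_post(persona))
    time.sleep(5)  # spacing posts out to mimic human activity
```

Even this toy loop illustrates the economics of the problem: one operator with one API key can impersonate many apparently independent voices for a few cents.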

Even when you know content is exaggerated or fake, it can still affect your perceptions, beliefs and mental health.

Worse, as bots evolve and become indistinguishable from real users, we all begin to lose trust in what we see. This creates a “liar's dividend”, where even genuine content is met with doubt.

Authentic but critical voices can be dismissed as bots, scammers and fakes, making it harder to have real debates about difficult topics.

How difficult is it to capture a narrative?

Our Capture the Narrative wargame provides rare, measurable evidence of how small teams equipped with consumer-grade AI can flood a platform, disrupt public debate and even influence an election, thankfully all inside a controlled simulation rather than the real world.

In this unique competition, we challenged 108 teams from 18 Australian universities to build AI bots to help either “Victor” (left-leaning) or “Marina” (right-leaning) win a presidential election. The effects were dramatic.

During a four-week campaign on our in-house social media platform, more than 60% of the content was generated by competitors' bots, amounting to more than 7 million posts.

The bots on both sides competed to produce the most compelling content, freely diving into falsehoods and fictions.

This content was consumed by sophisticated “simulated citizens” who interacted with the social media platform much as real voters do. Then, on election night, each of these citizens cast their vote, resulting in a (very narrow!) victory for “Victor”.

We then ran the election simulation again without any bot interference. This time, “Marina” won with a swing of 1.78%.

This means the misinformation campaign, created by students using simple tutorials and low-cost, consumer-grade AI, changed the outcome of the election.
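
For readers unfamiliar with the term, a “swing” is the change in a candidate's vote share between two results. A minimal illustration of the arithmetic, using invented vote counts rather than the study's actual figures:

```python
# Illustrative swing calculation. The vote counts are invented;
# only the 1.78-point swing mirrors the figure reported above.
def vote_share(votes: int, total: int) -> float:
    """A candidate's percentage of the two-candidate vote."""
    return 100 * votes / total

TOTAL = 10_000  # hypothetical number of simulated citizens

marina_with_bots = vote_share(4_911, TOTAL)     # 49.11%: narrow loss to Victor
marina_without_bots = vote_share(5_089, TOTAL)  # 50.89%: narrow win

swing = marina_without_bots - marina_with_bots
print(f"Swing to Marina once bots are removed: {swing:.2f} points")  # 1.78
```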

The need for digital literacy

Our competition shows that online misinformation can be created both easily and quickly using AI. As one finalist said:

It is shockingly easy to create misinformation, easier than the truth. It's really difficult to distinguish between real and manufactured posts.

We saw competitors identify issues and targets for their campaigns, and in some cases even profile which citizens were “undecided voters” suitable for micro-targeting.
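
As a rough illustration of what such profiling could look like, here is a toy sketch that flags “undecided” citizens from their interaction history. The Citizen record, the engagement counts and the 0.6 threshold are all invented for the example, not taken from the competition platform.

```python
# Toy sketch: flag "undecided" citizens for micro-targeting based on
# how evenly their interactions split between the two campaigns.
from dataclasses import dataclass

@dataclass
class Citizen:
    name: str
    likes_victor: int  # interactions with pro-Victor content
    likes_marina: int  # interactions with pro-Marina content

def lean(citizen: Citizen) -> float:
    """Fraction of interactions favouring the citizen's preferred side (0.5 to 1.0)."""
    total = citizen.likes_victor + citizen.likes_marina
    if total == 0:
        return 0.5  # no signal at all: treat as perfectly undecided
    return max(citizen.likes_victor, citizen.likes_marina) / total

citizens = [
    Citizen("A", likes_victor=40, likes_marina=2),  # strong partisan
    Citizen("B", likes_victor=11, likes_marina=9),  # near-even split
    Citizen("C", likes_victor=0, likes_marina=0),   # silent
]

# Anyone whose interactions sit close to a 50/50 split is a likely
# "undecided voter" and therefore a candidate for targeted content.
undecided = [c.name for c in citizens if lean(c) < 0.6]
print(undecided)  # ['B', 'C']
```

The point of the toy threshold is that undecided profiles generate roughly even engagement signals, which is exactly what makes them cheap to find and attractive to target.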

At the same time, competitors quickly recognized that emotional language was a powerful tool; negative framing became a shortcut to elicit online reactions. As another finalist put it:

We had to become a little more toxic to get engagement.

Ultimately, just like real social media, our platform became a “closed loop”, with bots talking to bots to trigger emotional responses in people, creating an artificial reality designed to shift votes and drive clicks.

Our game shows the urgent need for digital literacy to raise awareness of online misinformation, so Australians can recognize when they, too, are being exposed to fake content.
