Deep fakes in politics – when seeing is not any longer believing

We have well and truly entered an age where you cannot trust what you see online. 

While that statement has been partially true for many years, AI has elevated content manipulation to new levels, massively outpacing public awareness.

AI deep fake technology can create and alter images, videos, and audio recordings, putting words into the mouths of public figures or making them appear in situations that never occurred.

In many cases, it takes more than a second glance to determine the authenticity of content, and fake media can accumulate millions of impressions before being identified.

We’re now witnessing deep fakes that could potentially disrupt democratic processes, though it’s too early to measure tangible impacts on voting behavior.

Let’s examine some of the most notable political AI deep fake incidents we’ve witnessed so far.

Joe Biden New Hampshire incident

In January 2024, in New Hampshire, US, a robocall mimicking Biden’s voice encouraged voters to “save your vote for the November election,” wrongly suggesting that participating in the primary would inadvertently benefit Donald Trump. 

The call appeared to originate from the personal cellphone number of Kathy Sullivan, a former state Democratic Party chair. Sullivan condemned the act as a blatant form of election interference and personal harassment. 

The New Hampshire Attorney General’s office said this was an illegal effort to disrupt the presidential primary and suppress voter turnout. 

The fabricated audio was determined to have been generated using ElevenLabs, an industry leader in speech synthesis.

ElevenLabs later suspended the user behind the fake Biden voice and said, “We are dedicated to preventing the misuse of audio AI tools and take any incidents of misuse extremely seriously.”

German Chancellor Olaf Scholz deep fake incident

In November 2023, Germany witnessed an AI deep fake falsely depicting Chancellor Olaf Scholz endorsing a ban on the far-right Alternative for Germany (AfD) party. 

This deep fake video was part of a campaign by an art-activism group, the Center for Political Beauty (CPB), and aimed to draw attention to the rising influence of the AfD. Critiques of the AfD are often set against the backdrop of Germany’s 1930s history. 

Led by philosopher and artist Philipp Ruch, the CPB group aims to create “political poetry” and “moral beauty,” addressing key contemporary issues like human rights violations, dictatorship, and genocide.

The CPB has engaged in numerous controversial projects, such as the “Search for Us” installation near the Bundestag, which they claimed contained soil from former death camps and the remains of Holocaust victims. 

While support for the AfD has grown, numerous protests across Germany demonstrate strong opposition to the AfD’s ideologies.

A spokesperson for the group behind the deep fake stated, “In our eyes, the right-wing extremism in Germany that sits in parliament is more dangerous.”

AfD officials called the deep fake campaign a deceptive tactic aimed at discrediting the party and influencing public opinion. 

UK Prime Minister Rishi Sunak implicated in scams

In January 2024, a UK research company found that PM Rishi Sunak featured in over 100 deceptive video ads disseminated primarily on Facebook, reaching an estimated 400,000 people. 

These ads, originating from various countries, including the US, Turkey, Malaysia, and the Philippines, promoted fraudulent investment schemes falsely associated with high-profile figures like Elon Musk.

The research, conducted by the online communications company Fenimore Harper, highlighted how social media companies simply aren’t responding to this type of content in a reasonable timeframe. 

One deep fake ad pulled users into a fake BBC news page promoting a scam investment. Source: Fenimore Harper.

Marcus Beard, the founder of Fenimore Harper, explained how AI democratises misinformation: “With the advent of cheap, easy-to-use voice and face cloning, it takes very little knowledge and expertise to use a person’s likeness for malicious purposes.”

Beard also criticized the inadequacy of content moderation on social media, noting, “These adverts are against several of Facebook’s advertising policies. However, very few of the ads we encountered appear to have been removed.”

The UK government responded to the risk of fraudulent deep fakes: “We are working extensively across government to ensure we are ready to rapidly respond to any threats to our democratic processes through our defending democracy taskforce and dedicated government teams.” 

Pakistan Prime Minister Imran Khan appears in virtual rally

In December 2023, the former Prime Minister of Pakistan, Imran Khan, currently imprisoned on charges of leaking state secrets, appeared at a virtual rally using AI.

Despite Khan being behind bars, his digital avatar was watched by millions. The rally contained footage from past speeches involving his political party, Pakistan Tehreek-e-Insaaf (PTI).

Khan’s four-minute speech spoke of resilience and defiance against the political repression faced by PTI members. 

The AI voice articulated: “Our party isn’t allowed to hold public rallies. Our people are being kidnapped and their families are being harassed,” continuing, “History will remember your sacrifices.” 

Adding to the confusion, the Pakistani government allegedly tried to block access to the rally.

NetBlocks, an internet monitoring organization, stated, “Metrics show major social media platforms were restricted in Pakistan for [nearly] 7 hours on Sunday evening during an online political gathering; the incident is consistent with previous instances of internet censorship targeting opposition leader Imran Khan and his party PTI.”


Usama Khilji, a proponent of free speech in Pakistan, commented, “With a full crackdown on PTI’s right to freedom of association and speech via arrests of leadership, the party’s use of artificial intelligence to broadcast a virtual speech in the words of its incarcerated chairman and former Prime Minister Imran Khan marks a new point in the use of technology in Pakistani politics.” 

Fake audio of ex-Sudanese president Omar al-Bashir on TikTok

An AI-powered campaign on TikTok exploited the voice of former Sudanese President Omar al-Bashir amid the country’s ongoing civil turmoil. 

Since late August 2023, an anonymous account has posted what it claims are “leaked recordings” of al-Bashir. However, analysts determined the recordings were AI-generated fakes.

Al-Bashir has been absent from the public eye since he was ousted from power in 2019 amid serious war crime allegations. 

Slovakia’s election day audio scam

On the day of Slovakia’s election, a controversial audio recording surfaced featuring deep-faked voices of Michal Šimečka, leader of the Progressive Slovakia party, and journalist Monika Tódová discussing corrupt practices like vote-buying. 

This surfaced during Slovakia’s pre-election media blackout, so the implicated individuals couldn’t easily refute their involvement publicly before voting began. 

Both implicated parties later denounced the recording as a deep fake, which a fact-checking agency confirmed. 

Volodymyr Zelenskiy’s deep fake

In 2022, a deep fake video of Ukrainian President Volodymyr Zelenskiy, which amateurishly suggested he was calling on soldiers to desert their posts, was quickly identified as fake and removed by major social media platforms.

Turkish election deep fake drama

In the lead-up to Turkey’s parliamentary and presidential elections, a video falsely showing President Recep Tayyip Erdoğan’s main challenger, Kemal Kılıçdaroğlu, receiving support from the PKK was spread online. 

Donald Trump deep fakes

In early 2023, we witnessed realistic-looking deep fakes of Donald Trump being arrested and a campaign video by Ron DeSantis featuring AI-generated images of Trump embracing Anthony Fauci. 

AI images of Trump being arrested caused a stir back in March 2023.

Belgian political party’s Trump deep fake

An earlier incident, in 2018, saw a Belgian political party trigger public outcry with a deep fake video of its own making.

The video falsely depicted President Donald Trump advising Belgium to withdraw from the Paris climate agreement. 

The video was part of a high-tech forgery, which was later acknowledged by the party’s media team. It demonstrated how deep fakes can be used to fabricate statements by world leaders to influence public opinion and policy.

Deep fake of Nancy Pelosi

A manipulated video of Nancy Pelosi in 2020, made to look as if she was slurring her words and intoxicated, spread rapidly on social media. 

This demonstrated the potential of deep fakes to discredit and embarrass public figures, an effect that often persists even after the content is deemed fake. 

Audio deepfake of Keir Starmer

Another incident in British politics involved an audio clip allegedly capturing opposition leader Sir Keir Starmer swearing at his staff. 

The clip, widely circulated on social media, was later revealed to be an AI-generated deep fake.

As tech companies explore ways to tackle deep fakes at scale, the AI models used to create fake media will only become more sophisticated and easier to use.

The journey ahead demands a collaborative effort among technologists, policymakers, and the public to harness AI’s benefits while safeguarding society’s pillars of trust and integrity.

Trust in politics and public institutions is already fragile, to say the least. Deep fakes will further undermine it.
