
DeepMind study exposes deep fakes as leading type of AI misuse

AI has a myriad of uses, but one of its most concerning applications is the creation of deep fake media and misinformation.

A new study from Google DeepMind and Jigsaw, a Google technology incubator that monitors societal threats, analyzed the misuse of AI between January 2023 and March 2024.

It assessed some 200 real-world incidents of AI misuse, revealing that creating and disseminating deceptive deep fake media, particularly those targeting politicians and public figures, is the most common form of malicious AI use.

Deep fakes, synthetic media generated by AI algorithms to create highly realistic but fake images, videos, and audio, have become increasingly lifelike and pervasive. 

Incidents such as the explicit fake images of Taylor Swift that appeared on X show that such images can reach tens of millions of people before they are taken down. 

But the most insidious deep fakes are those targeting political issues, such as the Israel-Palestine conflict. In some cases, not even the fact-checkers charged with labeling them as “AI-generated” can reliably determine their authenticity. 

The DeepMind study collected data from a diverse array of sources, including social media platforms like X and Reddit, online blogs, and media reports. 

Each incident was analyzed to determine the specific type of AI technology misused, the intended purpose behind the abuse, and the level of technical expertise required to carry out the malicious activity.
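
To make that categorization concrete, here is a minimal sketch in Python of how each incident record might be structured along the study’s three axes. The field names and category values are illustrative assumptions for this article, not DeepMind’s actual schema, which has not been published in machine-readable form.

```python
from dataclasses import dataclass
from enum import Enum

class Expertise(Enum):
    """Rough skill level needed to carry out the misuse (illustrative)."""
    MINIMAL = "minimal"    # off-the-shelf tools, no coding required
    MODERATE = "moderate"
    ADVANCED = "advanced"

@dataclass
class MisuseIncident:
    """One real-world incident, tagged along the study's three axes.

    Field names are hypothetical; they simply mirror the dimensions
    the study describes: technology, intent, and expertise.
    """
    source: str          # e.g. "X", "Reddit", "media report"
    technology: str      # e.g. "image deep fake", "LLM text generation"
    intent: str          # e.g. "opinion manipulation", "monetization"
    expertise: Expertise

# Example: a hypothetical record for a political deep fake image
incident = MisuseIncident(
    source="X",
    technology="image deep fake",
    intent="opinion manipulation",
    expertise=Expertise.MINIMAL,
)
```

Tagging each of the roughly 200 incidents this way is what allows the study to report, for instance, how many cases were driven by opinion manipulation versus financial gain.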

Deep fakes are the dominant type of AI misuse

The findings paint an alarming picture of the current landscape of malicious AI use:

  1. Deep fakes emerged as the dominant form of AI misuse, accounting for nearly twice as many incidents as the next most prevalent category.
  2. The second most frequently observed form of AI abuse was the use of language models and chatbots to generate and disseminate disinformation online. By automating the creation of misleading content, bad actors can flood social media and other platforms with fake news and propaganda at an unprecedented scale.
  3. Influencing public opinion and political narratives was the primary motivation behind over a quarter (27%) of the AI misuse cases analyzed. This finding underscores the grave threat that deep fakes and AI-generated disinformation pose to democratic processes and the integrity of elections worldwide.
  4. Financial gain was identified as the second most common driver of malicious AI activity, with unscrupulous actors offering paid services for creating deep fakes, including non-consensual explicit imagery, and leveraging generative AI to mass-produce fake content for profit.
  5. The majority of AI misuse incidents involved readily accessible tools and services that required minimal technical expertise to operate. This low barrier to entry greatly expands the pool of potential malicious actors, making it easier than ever for individuals and groups to engage in AI-powered deception and manipulation.

Mapping AI misuse to intent. Source: DeepMind.

Nahema Marchal, the study’s lead author and a DeepMind researcher, explained the evolving landscape of AI misuse to the Financial Times: “There had been a lot of understandable concern around quite sophisticated cyber attacks facilitated by these tools,” continuing, “What we saw were fairly common misuses of GenAI [such as deep fakes that] might go under the radar a little bit more.”

Policymakers, technology firms, and researchers must work together to develop comprehensive strategies for detecting and countering deep fakes, AI-generated disinformation, and other forms of AI misuse.

But the reality is, they’ve already tried, and largely failed. Just recently, we’ve seen more cases of children getting caught up in deep fake incidents, showing that the societal harm they inflict can be grave. 

Currently, tech firms can’t reliably detect deep fakes at scale, and they’ll only grow more realistic and harder to detect over time. 

And once text-to-video systems like OpenAI’s Sora land, there’ll be a whole new dimension of deep fakes to handle. 
