DeepMind finds political deepfakes top the list of malicious AI deployments

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more common than attempts to use AI to assist cyber attacks, according to the first study by Google’s DeepMind division into the most common malicious uses of this cutting-edge technology.

According to the study, the creation of realistic but fake images, videos and audio of people is almost twice as common as the next most frequent misuse of generative AI tools: falsifying information using text-based tools, such as chatbots, to generate misinformation and post it online.

The most common goal of actors misusing generative AI is to shape or influence public opinion, according to the analysis, conducted jointly with Jigsaw, Google’s research and development unit. That accounts for 27 percent of uses and feeds fears about how deepfakes could influence elections around the world this year.

In recent months, deepfakes of British Prime Minister Rishi Sunak and other global leaders have appeared on TikTok, X and Instagram. The UK will hold a general election next week.

There is widespread concern that, despite social media platforms’ efforts to label or remove such content, audiences may not recognize it as fake, and that its spread could sway voters.

Ardi Janjeva, research fellow at the Alan Turing Institute, said the study’s finding that AI-generated content contaminating publicly available information could “distort our collective understanding of sociopolitical reality” was “particularly relevant”.

Janjeva added: “Even while we are uncertain about the impact of deepfakes on voting behavior, this distortion may be harder to detect in the short term and poses long-term risks to our democracies.”

The study, the first of its kind by DeepMind, Google’s AI unit led by Sir Demis Hassabis, is an attempt to quantify the risks associated with generative AI tools, which the world’s biggest technology companies have rushed to release to the public in pursuit of huge profits.

As generative products such as OpenAI’s ChatGPT and Google’s Gemini become more widely adopted, AI companies are beginning to monitor the flood of misinformation and other potentially harmful or unethical content created by their tools.

In May, OpenAI published research revealing that operations linked to Russia, China, Iran and Israel had used the company’s tools to create and spread disinformation.

“There has been, understandably, a lot of concern about quite sophisticated cyber attacks facilitated by these tools,” said Nahema Marchal, lead author of the study and a researcher at Google DeepMind. “Whereas what we saw were fairly common misuses of GenAI [such as deepfakes that] might fly a bit more under the radar.”

Researchers from Google DeepMind and Jigsaw analyzed around 200 observed incidents of misuse between January 2023 and March 2024, drawn from the social media platforms X and Reddit, as well as online blogs and media reports.

[Bar chart: the most common misuses of generative AI tools]

The second most common motive for misuse was making money, whether by offering services to create deepfakes, including generating nude depictions of real people, or by using generative AI to produce large volumes of content, such as fake news articles.

The research found that most incidents involved easily accessible tools “requiring minimal technical expertise”, meaning a wider range of bad actors can misuse generative AI.

Google DeepMind’s research will influence how the company improves its safety evaluations of models, and it hopes the findings will also shape how its competitors and other stakeholders view how “harm manifests”.
