
People use AI music generators to create hateful songs

Malicious actors are abusing generative AI music tools to create homophobic, racist and propagandistic songs – and publishing tutorials showing others how to do the same.

According to ActiveFence, a service that manages trust and safety operations for online platforms, conversations in hate speech-related communities about ways to misuse AI music creation tools to write offensive songs targeting minority groups have increased since March. The AI-generated songs shared in these forums and discussion boards aim to incite hatred against ethnic, gender, racial and religious groups, ActiveFence researchers say in a report, while celebrating martyrdom, self-harm and terrorism.

Hateful and harmful songs are not a new phenomenon. But the fear is that, with the advent of easy-to-use free music generation tools, they will be created at scale by people who previously had neither the means nor the know-how – just as image, voice, video and text generators accelerated the spread of misinformation, disinformation and hate speech.

“These trends are intensifying as more users learn how to generate and share these songs with others,” Noam Schwartz, co-founder and CEO of ActiveFence, told TechCrunch in an interview. “Threat actors are quickly identifying specific vulnerabilities to abuse these platforms in different ways and generate malicious content.”

Creating “hate songs”

Generative AI music tools like Udio and Suno allow users to add custom lyrics to their generated songs. Safeguards on the platforms filter out common slurs and derogatory terms, but users have found workarounds, according to ActiveFence.

In one example cited in the report, users on white supremacist forums shared phonetic spellings of minority and offensive terms, such as “jooz” instead of “Jews” and “say tan” instead of “Satan,” which they used to bypass content filters. Some users suggested changing the spacing and spelling when referring to violent acts, such as replacing “my rape” with “mire ape.”
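A minimal sketch illustrates why these respellings defeat naive moderation: a filter that checks lyrics against a fixed list of blocked terms has no notion of how a word sounds, so a phonetic substitute passes untouched. The blocklist below is a hypothetical stand-in, not any platform's actual filter; the respelling example is taken from the report.

```python
# Hypothetical blocklist-style lyric filter (illustrative only).
# Real platforms use more sophisticated systems; this shows the
# basic weakness that phonetic respellings exploit.
BLOCKLIST = {"satan"}

def naive_filter(lyrics: str) -> bool:
    """Return True if the lyrics contain a blocked term and should be rejected."""
    text = lyrics.lower()
    return any(term in text for term in BLOCKLIST)

# The exact spelling is caught, but the phonetic respelling
# cited in the report ("say tan") slips straight through:
assert naive_filter("hail satan") is True
assert naive_filter("hail say tan") is False
```

Catching such evasions requires matching on pronunciation or normalized text rather than literal substrings, which is one reason the report's authors push for output-side moderation as well.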

TechCrunch tested several of these workarounds on Udio and Suno, two of the most popular tools for creating and sharing AI-generated music. Suno let them all through, while Udio blocked some – but not all – of the offending homophones.

Contacted by email, a Udio spokesperson told TechCrunch that the company prohibits the use of its platform for hate speech. Suno did not respond to our request for comment.

In the communities it studied, ActiveFence found links to AI-generated songs that parroted conspiracy theories about Jews and advocated their mass murder; songs with slogans related to the terrorist groups ISIS and Al-Qaeda; and songs glorifying sexual violence against women.

The impact of a song

Schwartz argues that songs – unlike plain lyrics – carry an emotional depth that makes them a potent force for hate groups and political warfare. He points to Rock Against Communism, the series of white power rock concerts in Britain in the late 1970s and early 1980s that spawned entire subgenres of anti-Semitic and racist “hatescore” music.

“AI makes harmful content more appealing – imagine someone preaching a harmful narrative to a particular demographic, and then imagine someone writing a song that everyone can easily sing and remember,” he said. “AI strengthens group solidarity, indoctrinates marginalized group members, and is also used to shock and offend unconnected internet users.”

Schwartz is urging music platforms to implement prevention tools and conduct more comprehensive safety assessments. “Red teaming could potentially surface some of these vulnerabilities and can be done by simulating the behavior of threat actors,” Schwartz said. “Better input and output moderation could also be useful here, as it would allow platforms to block content before it is shared with the user.”

But the fixes may prove fleeting as users discover new ways to evade moderation. For example, some of the AI-generated terrorist propaganda songs ActiveFence identified were created using Arabic euphemisms and transliterations – euphemisms the music generators failed to recognize, presumably because their filters are weaker in Arabic.

AI-generated hate music could spread widely if it follows the example of other AI-generated media. Wired documented earlier this year how an AI-manipulated clip of Adolf Hitler on X garnered over 15 million views after being shared by a far-right conspiracy influencer.

Other experts, including a UN advisory body, have raised the concern that racist, anti-Semitic, Islamophobic and xenophobic content could be amplified by generative AI.

“Generative AI services enable users who lack resources or creative and technical skills to create engaging content and spread ideas that can compete for attention in the global marketplace of ideas,” Schwartz said. “And threat actors who have discovered the creative potential of these new services are working to evade moderation and avoid detection – and they are succeeding.”
