
The culture of computer science often treats anyone's data as fair game to feed AI algorithms – but artists are fighting back

Content created using generative AI is popping up everywhere, causing concern for some artists and content creators. They fear that their intellectual property may be at risk from generative AI tools built by scraping the web for data and images, regardless of whether their makers were authorized to do so.

Now some artists and content creators are trying new ways to sabotage the AI to prevent it from scraping their work, through so-called data poisoning.

In this episode of The Conversation Weekly podcast, we talk to a computer scientist who explains how data poisoning works, what impact it could have, and why he thinks the problem it's trying to combat is part of a larger ethical problem at the heart of computer science.

Dan Angus enjoys tinkering with generative AI. A trained computer scientist, he's now a professor of digital communications at the Queensland University of Technology in Australia and thinks a lot about AI, automation and their impact on society. He worries about what new generative AI tools mean for creators.

We need to pay attention to how they can invade intellectual property and the entire financial ecosystem that supports art and artists.

In recent years, a number of artists have brought copyright infringement cases accusing major technology firms of stealing their work.

When Angus spoke to The Conversation Weekly, he prompted a popular AI text-to-image generator to create a series of images – of a person riding a space bull in a Martian environment, in the style of Van Gogh. The resulting images are recognizable, if rather bizarre.

Images created via a prompt to Midjourney.
Screenshot from The Conversation, provided by author (no reuse)

However, if the image generator had been trained on "poisoned" data, the images it produced would be even stranger. The bull might be replaced with a horse, for instance, or the scene might not look like a Martian environment at all.

Angus explained that an artist who poisons their data in this way can insert small pixel-level changes into a digital image that are invisible to the naked eye but interfere with generative AI. They could "completely skew the training of the model in certain directions," he says, adding that "it doesn't take a lot of it to get into a system and wreak havoc."
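To make the idea concrete, here is a minimal, hypothetical sketch in Python of the general shape of the technique: a perturbation small enough to be invisible to a viewer but present in every pixel value a scraper would collect. This is not the Nightshade algorithm (real poisoning tools optimise the perturbation against an image model so the picture is mislearned as a different concept), and the file names and the `strength` parameter are illustrative assumptions.

```python
# Toy illustration of pixel-level data poisoning.
# NOTE: this is NOT Nightshade. Real tools compute the perturbation by
# optimising against an image model's feature space so the image is
# re-associated with a decoy concept. This sketch only shows the basic
# premise: a change too small for people to see, but present in the
# pixel values a scraping pipeline would train on.
import numpy as np
from PIL import Image

def poison_image(path_in: str, path_out: str, strength: int = 3, seed: int = 0) -> None:
    """Add an imperceptible, reproducible perturbation to an image.

    strength: maximum per-channel change in 0-255 pixel units; values
    around 2-4 are typically below the threshold of human perception.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)

    # Reproducible pseudo-random perturbation in [-strength, +strength].
    rng = np.random.default_rng(seed)
    delta = rng.integers(-strength, strength + 1, size=img.shape, dtype=np.int16)

    # Apply the perturbation and clip back to the valid 8-bit range.
    poisoned = np.clip(img + delta, 0, 255).astype(np.uint8)

    # Save losslessly so the perturbation survives on disk.
    Image.fromarray(poisoned).save(path_out, format="PNG")

if __name__ == "__main__":
    poison_image("artwork.png", "artwork_poisoned.png")
```

Random noise alone would not meaningfully poison a modern model; the point of the sketch is only that the change can sit below the threshold of human perception while still altering what a training pipeline sees.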

One such tool, called Nightshade, was released in January 2024 by a team at the University of Chicago, who told The Conversation that it was downloaded 250,000 times in its first week of release. Similar tools for audio and video are also available.

Angus doesn't believe data poisoning of this kind will have much impact on the most popular generative AI firms, largely because of the tools' limited reach. But he fears that a culture in computer science that focuses more on ends than means leads to intellectual property rights often being ignored.

This creates a certain attitude towards data that says found data is your own data. If you find it online, if you can download it, it's fair game and you can use it to train an algorithm, and that's okay because the ends usually justify the means.

In his opinion, this "really deep cultural problem" in the way computer scientists and developers handle data and build datasets could lead to larger problems down the road.

Listen to the full interview with Dan Angus on The Conversation Weekly podcast, which also features Eric Smalley, science and technology editor at The Conversation in the US.
