
2024 sees anger rise over corporate misuse of AI: what’s next?

January 2024 began with reports that Midjourney, a leading force in the AI image-generation world, had used the names and styles of over 16,000 artists without their consent to train its image-generation models.

You can view the artist database under Exhibit J of a lawsuit filed against Midjourney, Stability AI, and DeviantArt.

In the same week as that disclosure, cognitive scientist Dr. Gary Marcus and concept artist Reid Southen published an analysis in IEEE Spectrum titled “Generative AI Has a Visual Plagiarism Problem.”

They conducted a series of experiments with the AI models Midjourney and DALL-E 3 to explore their ability to generate images that might infringe on copyrighted material.

By feeding Midjourney and DALL-E 3 prompts intentionally kept brief and related to commercial films, characters, and recognizable settings, Marcus and Southen demonstrated how readily these models produce blatantly copyrighted content.

They used prompts related to specific movies, such as “Avengers: Infinity War,” without directly naming the characters. This was to test whether the AI would generate images closely resembling the copyrighted material from contextual cues alone.

Remarkably, Midjourney produced copyrighted characters from simple prompts like “animated toys.” Source: IEEE Spectrum

Cartoons were covered too – they experimented with generating images of “The Simpsons” characters, using prompts that led the AI models to produce distinctly recognizable images from the show.

Finally, Marcus and Southen tested prompts that don’t allude to copyrighted material at all, demonstrating Midjourney’s ability to reproduce copyrighted imagery even when it isn’t specifically requested.

Midjourney is recreating unlicensed IP en masse and sometimes nearly verbatim from even non-specific prompts, all while profiting from subscriptions. MJ users don’t have to sell the images for copyright infringement to have potentially occurred, MJ already profits from its creation. pic.twitter.com/Ax3tQWq3pt

This was more than a technical exposé – it touched the raw nerves of artistic communities worldwide.

Art, after all, isn’t just data. It’s the culmination of lifetimes of emotional investment, personal exploration, and painstaking craft.

Marcus and Southen’s study was about to become part of a protracted debate extending into copyright, intellectual property, AI monetization, and the corporate use of generative AI.

Companies are using AI-generated work, and observers are taking notice

One of generative AI’s marketing taglines for business adoption is “efficiency” or derivatives thereof.

Whether businesses use the technology to save time, cut costs, or solve problems, we’ve known for a while now that AI ‘efficiency’ carries some risk of displacing human skills or replacing jobs.

Companies are often encouraged to see this as an opportunity. Replacing a human with AI is commonly framed as a strategic choice.

However, viewing this trade-off between humans and machines so linearly can prove a grave error, as the following events reveal quite candidly.

People aren’t willing to let instances of corporate AI misuse fly when they have the chance to confront them.

ID@Xbox

Microsoft’s indie games program, ID@Xbox, drew criticism after promoting indie titles with an obviously AI-generated image. Xbox later removed the post but didn’t otherwise follow up on it.

In case anyone is keeping count… Xbox AND Game Informer have both used or promoted generative AI relatively recently. https://t.co/cOvkU3WXQ8 pic.twitter.com/2d5oeVTCLN

Game Informer, as you can see above, also posted a poor-quality AI-generated image of Master Chief from Halo.

Magic: The Gathering

Fantasy trading card game Magic: The Gathering conjured a storm of criticism when its maker posted a partially AI-generated image promoting a new card release. The background specifically was AI-generated, as evidenced by distorted lines and curves.

MTG initially rejected observers’ criticisms, which picked up pace throughout the week. The situation was worsened by the fact that the company had previously released a statement opposing the use of AI in its ‘main products.’

This was a promotional social media image, so it didn’t break that promise, but it was MTG’s initial flat denial that got the blood pumping for many.

“created by humans” Right… pic.twitter.com/gf9TUXWSPA

I hate that we live in the timeline where we have to fact-check art. pic.twitter.com/9D6V6ZXswW

Later in the week, MTG conceded defeat to the hordes of observers, confirming that the image was indeed AI-generated.

The statement began, “Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more” and explained how a designer likely used an AI tool like Firefly, integrated into Photoshop, or another AI-powered graphic design tool, rather than simply generating the entire image with Midjourney or similar.

Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more. (1/5)

Part of the debate was that MTG probably only used AI to generate the image’s background.

If Adobe Firefly was used here, which seems possible, it’s worth noting that Adobe is bullish about its ethically and legally sound use of training data, though that claim is debated.

Perhaps it’s not the worst offense among this week’s contenders, speaking of which…

Wacom

One of the biggest blunders of the week undoubtedly came from Wacom, which manufactures drawing tablets for artists and illustrators.

Shockingly, for a brand founded on helping artists create digital art, Wacom used an AI-generated image to advertise a discount coupon.

Again, users identified the AI origins of the image from distortions characteristic of the technology, such as the text at the bottom left of the image. Observers later found the dragon on Adobe Stock.

The response was brutal, with X users pointedly humiliating the brand and calling for a boycott of its products.

Because Wacom deleted their post.
Posting for internet historical preservation sake. https://t.co/WEZex5GbG9 pic.twitter.com/chiR2pOczB

Wacom apologized, but its attempt to pass responsibility to a third party wasn’t viewed sympathetically.

A message from the Wacom Team: pic.twitter.com/u06PNCvmhU

League of Legends

League of Legends was another brand felled by the distasteful use of AI-generated art.

While this is perhaps a more contentious or borderline example, there is definite evidence of AI, visible in some awkwardly shaped components and body parts.

In the past few days, Wizards of the Coast was caught using AI on ad campaign pieces after saying they wouldn’t. Wacom got caught as well and deleted, which is crazy considering their products, and looks like Apex Legends too. Jobs are going in real time, makes me nauseous. pic.twitter.com/EGBA1INMPZ

A reckoning for AI firms?

2024 has seen a continuation of lawsuits, with authors Nicholas Basbanes and Nicholas Gage filing a complaint alleging that OpenAI and Microsoft unlawfully used their written works, the latest since the December New York Times lawsuit.

The NYT’s lawsuit, in particular, could have monumental consequences for the AI sector.

Alex Connock, a senior fellow at Oxford University’s Saïd Business School, emphasized the potential impact, stating, “If the Times were to win the case, it could be catastrophic for the entire AI industry.”

He elaborated on the implications, noting that “a loss on the principle that fair dealing could enable learning from third-party materials would be a blow to the entire industry.”

Dr. Gary Marcus, involved in the Midjourney IEEE Spectrum study, also dubbed 2024 the ‘year of the AI lawsuit,’ and there are questions about whether this, combined with regulation and potential hardware shortages, could signal an ‘AI winter,’ in which the industry’s fervor for development cools.

2024 is 𝙙𝙚𝙛𝙞𝙣𝙞𝙩𝙚𝙡𝙮 going to be the year of the lawsuit in GenAI.

If you want to know why, and why GenAI will probably lose a lot of those suits or be forced to settle, check out the last few posts at my (free) 𝖲𝗎𝖻𝗌𝗍𝖺𝖼𝗄, Marcus on AI. https://t.co/cO4bqKkbsa

Connock also speculated on the broader repercussions of this deluge of lawsuits, explaining, “If OpenAI were to lose the case, it would open up the opportunity for all other content makers who believe their content has been crawled (which is basically everyone) and bring damage on an industrywide scale.”

Connock theorizes, “What will almost inevitably happen is that the NY Times will settle, having extracted a better monetization deal for use of its content.”

Exposing any chinks in the AI industry’s armor would be huge, both for large companies like the NYT and for independent creators.

As James Grimmelmann, a professor of digital and information law at Cornell, put it, “Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside.”

So, how strong is the industry’s defense? So far, AI developers are clinging to their ‘fair use’ arguments while gaining cover from the fact that the most popular datasets were created by entities other than themselves, which obscures their culpability.

Tech companies are adept at fighting off legal liabilities that stand in the way of R&D. And let’s not forget that AI presents opportunities for governments seeking ‘efficiency’ and other benefits, which softens their resistance.

The UK government, for instance, even explored a copyright exception for AI companies, something it U-turned on after huge resistance and pushback from a parliamentary committee.

In terms of strategy, in a discussion with the LA Times, William Fitzgerald, a partner at the Worker Agency and a former member of Google’s public policy team, said big tech would mount a powerful lobbying campaign, likely modeled on tactics previously used by tech giants like Google.

This would involve a mix of legal defense, public relations campaigns, and lobbying efforts, tactics that were particularly visible in past high-profile cases like the battle over the Stop Online Piracy Act (SOPA) and the Google Books litigation.

Fitzgerald observes that OpenAI appears to be following a path similar to Google’s, not only in its approach to handling copyright complaints but also in its hiring practices.

He points out, “It appears OpenAI is replicating Google’s lobbying playbook. They’ve hired former Google advocates to run the same playbook that’s been so successful for Google for decades now.”

Fitzgerald’s analysis implies that the AI industry, like other tech sectors before it, may rely on powerful lobbying efforts and strategic public policy maneuvers to shape the legal landscape in its favor.

How this pans out is impossible to predict. But you can be sure big tech is prepared to grind things out until the bitter end.
