
This week in AI: Generative AI and the issue of creator compensation

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories from the world of machine learning, along with notable research and experiments we didn't cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned.

This week in AI, eight prominent U.S. newspapers owned by investment giant Alden Global Capital, including the New York Daily News, Chicago Tribune and Orlando Sentinel, sued OpenAI and Microsoft for copyright infringement tied to the companies' use of generative AI technology. Like The New York Times in its ongoing lawsuit against OpenAI, the papers accuse OpenAI and Microsoft of using their intellectual property without permission or compensation to build and commercialize generative models like GPT-4.

“We've spent billions of dollars gathering information and reporting news at our publications, and we can't allow OpenAI and Microsoft to expand the big tech playbook of stealing our work to build their own businesses at our expense,” Frank Pine, the executive editor who oversees Alden's newspapers, said in a statement.

Given OpenAI's existing partnerships with publishers and its reluctance to stake its entire business model on a fair use argument, the lawsuit seems likely to end in a settlement and a licensing deal. But what about the rest of the content creators whose works are swept into model training without payment?

It seems OpenAI is thinking about that.

A recently published research paper co-authored by Boaz Barak, a scientist on OpenAI's Superalignment team, proposes a framework to compensate copyright owners “proportionally to their contributions to the creation of AI-generated content.” How? Through cooperative game theory.

The framework evaluates the extent to which the contents of a training data set (e.g., text, images or other data) influence what a model generates, using a game theory concept known as the Shapley value. Based on that assessment, the content owners' “rightful share” (i.e., compensation) is determined.

Let's say you have an image-generating model trained on the artwork of four artists: John, Jacob, Jack and Jebediah. You ask it to draw a flower in Jack's style. The framework lets you determine the influence of each artist's works on the art the model produces, and therefore the compensation each should receive.
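To make that concrete, here's a minimal sketch of an exact Shapley computation in Python. The utility function `v`, the artist roster and all the scores are invented for illustration; the paper doesn't prescribe this particular setup, and in practice the “players” would be training sources rather than four named artists.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: each player's marginal contribution to the
    utility v, averaged over all coalitions of the other players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                phi[p] += weight * (v(s | {p}) - v(s))
    return phi

# Toy utility: how well a model trained on a subset of the artists renders
# "a flower in Jack's style." Jack's work dominates; each other artist adds
# a little general drawing ability. All numbers are invented.
def v(coalition):
    return (0.7 if "Jack" in coalition else 0.0) + 0.05 * len(coalition - {"Jack"})

artists = ["John", "Jacob", "Jack", "Jebediah"]
phi = shapley_values(artists, v)
total = sum(phi.values())
for artist in artists:
    print(f"{artist}: value {phi[artist]:.2f}, share {phi[artist] / total:.0%}")
# Jack ends up with ~82% of the credit; John, Jacob and Jebediah split the rest.
```

One appealing property of this scheme is that the values always sum to the utility of the full training set, so the “rightful shares” divide exactly the total credit, no more and no less.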

However, the framework has a drawback: it's computationally expensive. The researchers' workarounds rely on estimates of compensation rather than exact calculations. Would that satisfy content creators? I'm not sure. If OpenAI ever puts it into practice, we'll certainly find out.
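The expense comes from the exact formula touching every subset of contributors, i.e., 2^n utility evaluations, each of which may itself require retraining or re-evaluating a model; with millions of training sources that's hopeless. The paper's own estimators aren't reproduced here, but a standard workaround of this kind is Monte Carlo permutation sampling, sketched below against the same toy setup (sample count and seed are arbitrary).

```python
import random

def shapley_monte_carlo(players, v, samples=5_000, seed=0):
    """Approximate Shapley values by sampling random join orders and
    averaging each player's marginal contribution at the moment they join."""
    rng = random.Random(seed)
    order = list(players)
    est = {p: 0.0 for p in order}
    for _ in range(samples):
        rng.shuffle(order)
        coalition, base = frozenset(), v(frozenset())
        for p in order:
            coalition = coalition | {p}
            gain = v(coalition) - base
            est[p] += gain
            base += gain
    return {p: total / samples for p, total in est.items()}

# Reusing the toy v and artists from the sketch above:
print(shapley_monte_carlo(["John", "Jacob", "Jack", "Jebediah"], v))
```

With the additive toy utility above, every ordering yields identical marginals, so the estimate converges immediately; for realistic, non-additive utilities the sampling error shrinks as the sample count grows, trading precision for tractability, which is exactly the trade-off flagged above.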

Here are some other notable AI stories from recent days:

  • Microsoft reiterates facial recognition ban: Language added to the terms of service for Azure OpenAI Service, Microsoft's fully managed wrapper for OpenAI technology, more clearly prohibits integrations from being used “by or for” police departments for facial recognition in the United States.
  • The nature of AI-native startups: AI startups face a different set of challenges than a typical software-as-a-service company. That was the message from Rudina Seseri, founder and managing partner of Glasswing Ventures, last week at the TechCrunch Early Stage event in Boston; Ron has the full story.
  • Anthropic launches a business plan: AI startup Anthropic is launching a new paid plan for enterprises, along with a new iOS app. Team – the enterprise plan – gives customers higher-priority access to Anthropic's Claude 3 family of generative AI models, plus additional admin and user management controls.
  • CodeWhisperer no more: Amazon CodeWhisperer is now Q Developer, part of Amazon's Q family of business-focused generative AI chatbots. Available on AWS, Q Developer helps developers with some of the tasks they do in their day-to-day work, such as debugging and upgrading apps, much like CodeWhisperer did.
  • Just walk out of Sam's Club: Walmart-owned Sam's Club says it's turning to AI to speed up its “exit technology.” Instead of requiring store staff to check members' purchases against their receipts as they leave a store, Sam's Club customers who pay either at a register or through the Scan & Go mobile app can now leave certain store locations without having their purchases double-checked.
  • Fish harvesting, automated: Harvesting fish is an inherently messy business. Shinkei is working to improve it with an automated system that dispatches fish more humanely and reliably, which could result in a totally different seafood economy, Devin reports.
  • Yelp's AI assistant: Yelp this week announced a new AI-powered chatbot for consumers — based on OpenAI models, the company says — that helps them connect with relevant businesses for their tasks (e.g., installing lighting fixtures, upgrading outdoor spaces). The company is rolling out the AI assistant in its iOS app under the Projects tab, with plans to expand to Android later this year.

More machine learning

Photo credit: US Department of Energy

Sounds like it was quite a party at Argonne National Lab this winter, when they brought together 100 experts from the AI and energy sectors to discuss how the rapidly evolving technology could help the country's infrastructure and R&D in that area. The resulting report is more or less what you'd expect from that crowd: plenty of futuristic stuff, but informative nonetheless.

Looking at nuclear power, the grid, carbon management, energy storage and materials, the themes that emerged from the meeting were, first, that researchers need access to high-powered compute tools and resources; second, that they need to learn to recognize the weak points of the simulations and predictions (including those enabled by the first thing); and third, that there's a need for AI tools that can integrate and access data from multiple sources and in many formats. We've seen all these things happening across the industry in various ways, so it's no big surprise, but nothing gets done at the federal level without a few experts putting out a paper, so it's good to have it on the record.

Georgia Tech and Meta are working on part of that with a big new database called OpenDAC, a stack of reactions, materials and calculations intended to help scientists design carbon capture processes more easily. It focuses on metal-organic frameworks, a promising and popular material type for carbon capture, but one with thousands of variations that haven't been exhaustively tested.

The Georgia Tech team partnered with Oak Ridge National Lab and Meta's FAIR to simulate quantum chemistry interactions in these materials, requiring around 400 million compute hours, far more than a university can easily muster. Hopefully it will be helpful to the climate scientists working in this field. It's all documented here.

We hear a lot about AI applications in the medical field, though most play a sort of advisory role, helping experts notice things they might not otherwise have seen or spotting patterns that would have taken a technician hours to find. That's partly because these machine learning models just find connections between statistics without understanding what caused what. Researchers at Cambridge and Ludwig Maximilian University of Munich are working on that, since moving beyond basic correlational relationships could be hugely helpful in creating treatment plans.

The work, led by Professor Stefan Feuerriegel of LMU, aims to create models that can identify causal mechanisms, not just correlations: “We give the machine rules for recognizing the causal structure and correctly formalizing the problem. Then the machine has to learn to recognize the effects of interventions and understand, so to speak, how real-life consequences are mirrored in the data that has been fed into the computers,” he said. It's early days for them, and they know it, but they believe their work is part of an important decade-scale development period.
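To illustrate the gap between correlation and intervention that the LMU group is targeting, here's a toy example in Python. This sketch is not their method, and the data and numbers are invented: sicker patients are treated more often, so a naive comparison understates the treatment's true benefit, while adjusting for the confounder recovers it.

```python
import random

random.seed(0)
rows = []
for _ in range(100_000):
    severity = random.random()             # confounder: sicker patients...
    treated = random.random() < severity   # ...are more likely to be treated
    # True causal effect of treatment: +0.30 recovery probability (invented)
    p_recover = 0.6 - 0.4 * severity + (0.3 if treated else 0.0)
    rows.append((severity, treated, random.random() < p_recover))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Naive correlation: compare raw recovery rates of treated vs. untreated.
naive = (mean(r for _, t, r in rows if t)
         - mean(r for _, t, r in rows if not t))

# Adjust for the confounder: compare within severity strata, then average.
strata = [(k / 10, (k + 1) / 10) for k in range(10)]
adjusted = mean(
    mean(r for s, t, r in rows if lo <= s < hi and t)
    - mean(r for s, t, r in rows if lo <= s < hi and not t)
    for lo, hi in strata
)

print(f"naive difference:    {naive:+.2f}")     # roughly +0.17, badly biased
print(f"adjusted difference: {adjusted:+.2f}")  # close to the true +0.30
```

The adjustment only works here because the confounder is observed and the causal structure is assumed known; Feuerriegel's point is precisely that the machine needs rules for recognizing that structure before it can reason about interventions.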

Over at the University of Pennsylvania, graduate student Ro Encarnación is working on a new angle in the field of “algorithmic justice,” which we've seen pioneered (primarily by women and people of color) over the past seven or eight years. Her work focuses more on the users than the platforms, documenting what she calls “emergent auditing.”

What do users do when TikTok or Instagram puts out a filter that's kind of racist, or an image generator that does something eye-popping? Complain, sure, but they also keep using it, and learn how to work around or even exacerbate the problems it encodes. It's not a “solution” in the way we might expect, but it demonstrates the diversity and resilience of the user side of the equation; they're not as fragile or passive as you might think.
