OpenAI outlines plans for responsible AI data usage and creator partnerships 

OpenAI recently announced a new approach to data and AI, emphasizing responsible AI development and partnerships with creators and content owners.

The company has pledged to build AI systems that expand opportunities for everyone while respecting the choices of creators and publishers.

“AI should expand opportunities for everyone. By transforming information in new ways, AI systems help us solve problems and express ourselves,” OpenAI stated in its recent blog post.

As part of this strategy, the company is developing a tool called Media Manager, intended to let creators and content owners specify how they want their works included in or excluded from machine learning research and training.

“Our goal is to have the tool in place by 2025, and we hope it will set a standard across the AI industry,” OpenAI stated.

There’s little information available about Media Manager and how it would work. It appears it will take the form of a self-service tool where creators can identify and control their data.

Some speculate that OpenAI will actively identify creators’ data within its datasets using machine learning – which would be a major undertaking.

Ultimately, we don’t yet know how it will work or how effective it will be.

OpenAI announced Media Manager, a planned platform to let creators opt in or out of generative AI training.

– I’m pleased they’re engaging with this issue
– They acknowledge that existing opt-outs aren’t adequate
– If you opt out, it sounds like they’ll use ML to…

A positive move from OpenAI? Possibly, but if OpenAI genuinely believes that training AI models on publicly available data falls under fair use, there would be no need for an opt-out option.

Moreover, if OpenAI can develop tools to identify copyrighted material, it could presumably use them to filter its data scraping from the outset rather than requiring content creators to opt out.

Plus, 2025 gives the company enough time to build a colossal foundational dataset of people’s copyrighted works without their permission.

From there, it’s primarily a matter of fine-tuning. OpenAI will continue to buy data from sources like the Financial Times and Le Monde to keep its models up to date.

This does, at the very least, serve as evidence that there’s pressure on OpenAI and other AI companies to handle data more ethically.

Adding to a stack of lawsuits, European privacy advocacy group Noyb recently launched legal action against OpenAI, claiming that ChatGPT repeatedly generates inaccurate information about people and fails to correct it.

OpenAI’s response was characteristic: ‘You might be right, but we can’t, or won’t, do anything about it.’
