
Researchers join open letter advocating for independent AI evaluations

Over 100 leading AI experts have issued an open letter demanding that the companies behind generative AI technologies, such as OpenAI and Meta, open their doors to independent testing.

Their message is clear: AI developers’ terms and conditions are curbing independent research into the safety of AI tools.

Signatories include leading experts such as Stanford’s Percy Liang, Pulitzer Prize-winner Julia Angwin, Stanford Internet Observatory’s Renée DiResta, Mozilla Fellow Deb Raji, ex-European Parliament member Marietje Schaake, and Suresh Venkatasubramanian from Brown University.

The researchers argue that the mistakes of the social media era, when independent research was often marginalized, shouldn’t be repeated.

To combat this risk, they ask that OpenAI, Meta, Anthropic, Google, Midjourney, and others create a legal and technical safe harbor for researchers to evaluate AI products without fear of being sued or banned.

The letter says, “While companies’ terms of service deter malicious use, they also offer no exemption for independent good faith research, leaving researchers at risk of account suspension or even legal reprisal.”

AI companies impose strict usage policies to prevent their tools from being manipulated into bypassing guardrails. For example, OpenAI recently branded investigative efforts by the New York Times as “hacking,” and Meta has threatened to withdraw licenses over intellectual property disputes.

A recent study probed Midjourney and revealed numerous instances of copyright violation, which would have breached the company’s T&Cs.

The problem is that because AI tools are largely unpredictable under the hood, they depend on people using them in a particular way to remain ‘safe.’

However, those same policies make it difficult for researchers to probe and understand models.

The letter, published on MIT’s website, makes two pleas:

1. “First, a legal safe harbor would indemnify good faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.”

2. “Second, companies should commit to more equitable access, by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions, and mitigate the concern of companies selecting their own evaluators.”

The letter also introduces a policy proposal, co-drafted by some signatories, which suggests changes to the companies’ terms of service to accommodate academic and safety research.

This adds to a broadening consensus about the risks associated with generative AI, including bias, copyright infringement, and the creation of non-consensual intimate imagery.

By advocating for a “safe harbor” for independent evaluation, these experts are championing the cause of public interest, aiming to create an ecosystem where AI technologies can be developed and deployed responsibly, with the well-being of society at the forefront.
