
NIST launches a new platform for evaluating generative AI

The National Institute of Standards and Technology (NIST), the U.S. Department of Commerce agency that develops and tests technologies for the U.S. government, businesses and the broader public, announced Monday the launch of NIST GenAI, a new program led by NIST to evaluate generative AI technologies, including text- and image-generating AI.

NIST GenAI will publish benchmarks, help develop “content authenticity” detection systems (i.e., deepfake checking), and encourage the development of software to detect the source of fake or misleading AI-generated information, NIST says on the newly launched NIST GenAI website and in a press release.

“The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies,” the press release reads. “These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”

NIST GenAI’s first project is a pilot study to build systems that can reliably tell the difference between human-created and AI-generated media, starting with text. (While many services claim to detect deepfakes, studies and our own testing have shown them to be shaky at best, particularly where text is concerned.) NIST GenAI is inviting teams from academia, industry and research labs to submit either “generators,” AI systems that generate content, or “discriminators,” systems designed to identify AI-generated content.

Generators in the study must produce summaries of 250 words or fewer given a topic and a set of documents, while discriminators must detect whether a given summary was potentially written by AI. To ensure fairness, NIST GenAI will provide the data needed to test the generators. Systems that rely on publicly available data and don’t “comply with applicable laws and regulations” won’t be accepted, NIST says.
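For readers curious what a “discriminator” amounts to in practice, the sketch below shows one toy approach: a text classifier that estimates how likely a summary is to be AI-generated. It is a minimal illustration assuming Python with scikit-learn; the example summaries, labels and model choice are invented for demonstration and are not drawn from NIST’s actual pilot data or scoring pipeline.

# A minimal sketch of a "discriminator" in spirit: a classifier that scores
# whether a summary reads as AI-generated. The training texts, labels and
# model choice are illustrative assumptions, not NIST's evaluation harness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled examples: 1 = AI-generated summary, 0 = human-written summary.
summaries = [
    "The report outlines key findings and recommends further study.",
    "Honestly, the report rambles, but the gist is: we need more data.",
    "This document provides a comprehensive overview of the main topics.",
    "I skimmed it on the train; only the middle section is worth reading.",
]
labels = [1, 0, 1, 0]

# Character n-grams are a simple stylistic signal; real systems use far more.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
features = vectorizer.fit_transform(summaries)

classifier = LogisticRegression()
classifier.fit(features, labels)

def score_summary(text: str) -> float:
    """Return the estimated probability that `text` is AI-generated."""
    return float(classifier.predict_proba(vectorizer.transform([text]))[0, 1])

print(score_summary("This summary presents the central arguments of the source documents."))

A real entry would of course be trained on far larger corpora and evaluated blind against NIST-provided test data rather than a handful of hand-written lines.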

Registration for the pilot begins May 1, with the first of two rounds scheduled to close August 2. Final results from the study are expected to be published in February 2025.

NIST GenAI’s launch and deepfake-focused study come at a time when the volume of AI-generated misinformation and disinformation is growing exponentially.

According to Clarity, a deepfake detection company, 900% more deepfakes have been created and published this year than in the same period last year. That understandably sets off alarms. A recent YouGov poll found that 85% of Americans were concerned about misleading deepfakes spreading online.

The launch of NIST GenAI is part of NIST’s response to President Joe Biden’s executive order on AI, which laid out rules requiring AI companies to be more transparent about how their models work and established a number of new standards, including for labeling AI-generated content.

It is also NIST’s first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to the agency’s AI Safety Institute.

Christiano was a controversial choice because of his “doomerist” views; he once predicted that “there is a 50 percent chance that AI development could lead [to the destruction of humanity].” Critics, who reportedly include scientists at NIST, fear that Christiano could encourage the AI Safety Institute to focus on “fantasy scenarios” rather than the realistic, more immediate risks posed by AI.

NIST says NIST GenAI will inform the work of the AI Safety Institute.
