
OpenAI acknowledges that new models increase the risk of misuse to create bioweapons


OpenAI's latest models have “significantly” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.

The San Francisco-based group unveiled its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. The advances are seen as a crucial breakthrough in the development of artificial general intelligence – machines with human-level cognition.

OpenAI's system card, a document explaining how the AI works, said the new models carried a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons – the highest risk rating OpenAI has ever given its models. The company said this meant the technology had “significantly improved” the ability of experts to develop bioweapons.

According to experts, AI software with more advanced capabilities, such as the ability to reason step by step, poses an increased risk of misuse by malicious actors.

Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world's leading AI researchers, said that if OpenAI now represents a “medium risk” for chemical and biological weapons, “this only underscores the importance and urgency” of legislation such as a hotly debated bill in California to regulate the sector.

The measure – known as SB 1047 – would require makers of the most expensive models to take steps to minimise the risk of their models being used to develop bioweapons. As “frontier” AI models advance towards AGI, “the risks will continue to increase if the proper safeguards are missing,” Bengio said. “Improving AI's ability to reason and using that ability to deceive is particularly dangerous.”

The warnings come as technology companies including Google, Meta and Anthropic race to build and refine sophisticated AI systems, aiming to create software that can act as “agents” to help people complete tasks and manage their lives.

These AI agents are also seen as potential moneymakers for companies grappling with the enormous costs of training and running new models.

Mira Murati, OpenAI's chief technology officer, told the Financial Times that the company was taking a particularly “cautious” approach to releasing o1 because of its advanced capabilities, although the product will be widely available through ChatGPT's paid subscribers and to programmers via an API.

She added that the model had been tested by so-called red-teamers – experts from various scientific fields who have tried to break the model – to probe its limits. Murati said the current models performed far better on overall safety metrics than their predecessors.

OpenAI said the preview model “can be safely deployed under [its own policies] and is rated 'medium risk' on [its] cautious scale, because it does not enable risks beyond what is already possible with existing resources”.

