We need a Food and Drug Administration for AI

While medicines have saved tens of millions of lives, hundreds of people died in the nineteenth century from taking unsafe drugs sold by charlatans. In the United States and Europe, this led to the gradual introduction of food and drug safety laws and institutions – including the US Food and Drug Administration – to ensure that the benefits of medicines outweigh their harms.

The rise of large-scale artificial intelligence language models such as GPT-4 is giving new impetus to industry, making everything from scientific innovation to education to film production easier and more efficient. But alongside the enormous benefits, these technologies can also pose serious risks to national security.

We would not allow a new drug to be sold without thorough testing for safety and efficacy. Why should AI be any different? The creation of a “Food and Drug Administration for AI” may be a crude metaphor, as the Institute for Artificial Intelligence has written, but it is time for governments to mandate safety testing for AI.

The UK government under former Prime Minister Rishi Sunak deserves real credit here: only a year after Sunak took office, the UK held the groundbreaking Bletchley Park AI Safety Summit, established a comparatively well-funded AI Safety Institute, and audited five leading large language models.

The US and other countries such as Singapore, Canada and Japan are emulating the UK's approach, but these efforts are still in their infancy. OpenAI and Anthropic are voluntarily allowing the US and UK to test their models, and they deserve credit for doing so.

Now it is time to go a step further. The most glaring gap in our current approach to AI safety is the lack of mandatory, independent and rigorous testing to prevent AI from causing harm. Such testing should apply only to the largest models and be mandatory before they are unleashed on the public.

While drug testing can take years, the AI Safety Institute's technical teams were able to conduct narrowly focused testing in a matter of weeks, so safety testing would not significantly slow innovation.

In particular, the tests should examine the extent to which a model could cause tangible, physical harm, such as whether it could help create biological or chemical weapons or undermine cyber defenses. It is also necessary to assess whether a model is difficult for humans to control and can train itself to break out of the safety measures designed to constrain it. Some of this has already happened – in February 2024, it was discovered that hackers working for China, Russia, North Korea and Iran had used OpenAI's technology to carry out novel cyberattacks.

Although ethical AI and bias are also critical issues, there is more disagreement within society about what constitutes such bias. Testing should therefore initially focus on national security and physical harm to humans as the greatest threats posed by AI. Imagine, for example, a terrorist group using AI-controlled self-driving vehicles to deliberately detonate explosive devices – a fear that NATO has also expressed.

Once they pass these initial tests, AI companies should be required – much like those in the pharmaceutical industry – to closely and continuously monitor potential misuse of their models and to report any misuse immediately. This is common practice in the pharmaceutical industry and ensures that potentially harmful drugs are removed from the market.

In return for such monitoring and testing, cooperating companies should be given a “safe harbor” that protects them from some legal liability. Both the US and UK legal systems already have laws that balance the risks and benefits of products such as engines, cars, medicines and other technologies. For example, airlines that have otherwise complied with safety regulations are generally not liable for the consequences of unforeseeable natural disasters.

If AI developers refuse to comply with the regulations, they should face penalties – just as pharmaceutical companies that withhold their data from regulators do.

California is paving the way here: last month, the state passed a bill – currently awaiting approval from Governor Gavin Newsom – that would require AI developers to create safety protocols to mitigate “critical harms.” While the measure is not too onerous, it is a step in the right direction.

For many years, rigorous reporting and testing regulations in the pharmaceutical sector have enabled the responsible development of medicines that help, rather than harm, the population. While the UK's AI Safety Institute and others like it represent an important first step, to fully realize the benefits of AI we must take concrete action immediately to create and implement safety standards – before models cause harm in the real world.
