
Is regulating hardware the answer to AI safety? These experts think so

Experts suggest that the most effective way to ensure AI safety may be to regulate its "hardware" – the chips and data centers, or "compute," that power AI technologies.

The report, a collaboration between distinguished institutions including the Center for AI Safety (CAIS), the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and OpenAI, proposes a global registry to track AI chips and "compute caps" that would apply to R&D distributed across different nations and companies.

This novel, hardware-centric approach could be effective due to the physical nature of chips and data centers, which makes them easier to regulate than intangible data and algorithms.

Haydn Belfield, co-lead author from the University of Cambridge, explains the role of computing power in AI research and development, saying: "AI supercomputers consist of tens of thousands of networked AI chips… and consume tens of megawatts of electricity."

The report, with a total of 19 authors including "AI godfather" Yoshua Bengio, highlights the colossal growth in computing power required by AI, noting that the largest models today require 350 million times more computing power than they did thirteen years ago.

The authors argue that the exponential increase in demand for AI hardware presents an opportunity to prevent centralization and to keep AI from spiraling out of control. Given the enormous power consumption of some data centers, this could also reduce AI's growing strain on energy grids.

Drawing parallels to nuclear regulation – which others, including OpenAI CEO Sam Altman, have cited as a model for regulating AI – the report proposes policies to improve global visibility into AI computing, allocate computing resources for the benefit of society, and enforce limits on computing power to mitigate risks.

Professor Diane Coyle, another co-author, points to the benefits of hardware monitoring for maintaining a competitive market, saying it "would greatly help competition authorities keep a check on the market power of the biggest technology firms," opening up room for more innovation and new market entrants.

Belfield sums up the key message of the report: "Trying to govern AI models while they're in use could prove futile, like chasing shadows." Anyone who wants to establish AI regulation, he argues, should look upstream to computing power, "the source of the power that drives the AI revolution."

Multilateral agreements like this require global cooperation, which in the case of nuclear energy only came about after large-scale disasters.

A series of incidents led to the creation of the International Atomic Energy Agency (IAEA) in 1957, though serious problems persisted up until Chernobyl.

Today, the design, approval, and construction of a nuclear reactor can take ten years or more because the process is closely monitored at every stage. Every component is carefully scrutinized because nations collectively understand the risks, both individual and shared.

Might we also need a major catastrophe before AI safety is taken seriously?

As for hardware regulation, who would lead a governance effort that limits chip supply? Who would set the terms of the agreement, and could it be enforced?

And how do you prevent those with the strongest supply chains from taking advantage of their competitors' restrictions?

What about Russia, China and the Middle East?

It's easy to limit chip supplies while China relies on U.S. manufacturers like Nvidia, but that won't last forever. China wants to be self-sufficient in AI hardware within this decade.

The 100+ page report provides some clues, and this appears to be an avenue worth exploring. However, implementing such a plan will require more than convincing arguments.
