Regulators will always struggle to keep pace with AI developments



Lawmakers around the world are grappling with artificial intelligence. The initial efforts are extensive but hardly rapid. The EU AI Act, the first out of the starting blocks, runs to 144 pages. Regulation is lagging miles behind innovation: the EU was forced to add a chapter on generative AI midway through the legislative process.

Granted, this fast-moving technology touches on myriad economic, financial and social issues. It requires many guardrails.

Unlike the EU's principles-based approach to data in the General Data Protection Regulation (GDPR), the AI Act takes a product-safety approach, similar to the regulation of cars or medical devices. It aims to quantify and address risks by setting standards and reviewing systems before launch. Think of crash-testing a car model before it reaches the market.

The EU classifies capabilities, and the resulting requirements, according to risk profile. At the top of the pyramid is the Black Mirror material: behavioural manipulation and social scoring, which are banned outright. At the bottom are run-of-the-mill spam filters and AI-enabled games, where a voluntary code of conduct suffices.

Of course, it is the middle two tiers that will have the biggest impact on technology developers and their users. Financial services firms and other companies that use AI tools for tasks such as credit scoring or hiring fall into this category. Users also face liability in the higher-risk categories if they modify a model: a company might change how it uses AI over time, for instance moving from screening CVs to making promotion decisions.

One likely consequence is heavy use of contracts between those deploying AI and the big technology providers, says Professor Lilian Edwards of Newcastle University.

Defining what constitutes a systemic risk in generative AI is difficult. The EU, and the US in its executive order on AI, have turned to computing-power metrics. The EU sets its threshold at 10²⁵ floating-point operations used in training, a measure of cumulative computation, while the US has set it at 10²⁶. Exceeding the threshold triggers additional obligations.

The problem is that this measures the computation needed for training, which could fall as well as rise once a model is deployed. It is also a somewhat spurious number: many other factors, including data quality and inference techniques, can boost performance without requiring additional training compute. And it may quickly become outdated: today's outlier figure could be mainstream next year.

The EU law, officially in force since August, is being phased in gradually. As capabilities develop, more hurdles will emerge. Even as the rules evolve, there is a risk they will lag behind the technology.

