The European Union’s Artificial Intelligence (AI) law officially came into force on August 1, 2024 – a turning point for global AI regulation.
This comprehensive legislation categorizes AI systems by risk level and prescribes different degrees of oversight depending on the risk category.
The law will completely ban some AI applications that pose “unacceptable risks,” such as those aimed at manipulating human behavior.
Although the law is now in force in all 27 EU member states, most of its provisions will not take effect immediately.
Rather, this date marks the start of a preparation phase for both regulators and companies.
Nevertheless, things are moving forward, and the law will undoubtedly shape the future of the development, deployment and management of AI technologies both within the EU and internationally.
The implementation schedule is as follows:
- February 2025: Bans on AI practices that pose “unacceptable risks” come into force. These include social scoring systems, untargeted scraping of facial images, and the use of emotion recognition technology in workplaces and educational institutions.
- August 2025: Requirements for general-purpose AI models come into force. This category, which includes large language models such as GPT, must comply with rules on transparency, security and risk mitigation.
- August 2026: Regulations for high-risk AI systems in critical sectors such as healthcare, education and employment become mandatory.
The European Commission is preparing to enforce these new rules.
Commission spokesman Thomas Regnier said that around 60 existing staff will be transferred to the new AI Office and another 80 external candidates will be hired next year.
In addition, each EU member state must establish competent national authorities to monitor and enforce the law by August 2025.
Compliance will not happen overnight. While every major AI company has been preparing for the law for some time, experts estimate that implementing the required controls and practices could take up to six months.
The stakes are high for companies targeted by the law. Companies that violate it face fines of up to €35 million or 7 percent of their annual global turnover, whichever is higher.
These penalties exceed those of the GDPR, and the EU is not inclined to make empty threats – it has already collected over €4 billion in GDPR fines.
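The "whichever is higher" rule means the €35 million figure acts as a floor that only binds smaller companies. A minimal sketch of the calculation (the figures come from the article; the function name and sample turnovers are illustrative):

```python
def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Maximum fine under the EU AI Act for the most serious violations:
    EUR 35 million or 7% of annual global turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0

# For a company with EUR 100 million turnover, the EUR 35M floor applies.
print(max_ai_act_fine(100_000_000))  # 35000000.0
```

For any company with turnover above €500 million, the percentage-based figure dominates, which is why the fines weigh most heavily on the large players discussed below.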
International impact
As the world's first comprehensive AI regulation, the EU AI Act will set new standards.
Big players like Microsoft, Google, Amazon, Apple and Meta will be most affected by the new regulations.
As Charlie Thompson of Appian told CNBC: “The AI law will likely apply to all organizations operating or having influence in the EU, no matter where they are headquartered.”
Some US companies are taking preemptive measures. Meta, for instance, has limited the availability of its AI model LLaMa 400B in Europe, citing regulatory uncertainty.
To comply with the regulations, AI companies may have to revise their training datasets, implement stricter human oversight and provide detailed documentation to EU authorities.
This runs counter to the way the AI industry operates. The proprietary models of OpenAI, Google and others are tightly guarded. Training data is extremely valuable, and releasing it would likely expose large amounts of copyrighted material.
Some companies are under more pressure to act than others
The EU Commission estimates that around 85 percent of AI companies fall into the “minimal risk” category and therefore require little oversight. Nevertheless, the law's provisions significantly affect the activities of companies in the higher-risk categories.
Human resources and employment is one area that falls into the “high risk” category under the law.
Major enterprise software providers such as SAP, Oracle, IBM, Workday and ServiceNow have all launched AI-powered HR applications that integrate AI into candidate selection and management.
Jesper Schleimann, SAP's AI representative for EMEA, told The Register that the company has established robust processes to ensure compliance with the new rules.
Similarly, Workday has implemented a Responsible AI program under the leadership of senior executives to meet the requirements of the law.
Another high-risk category is AI systems used in critical infrastructure and essential public and private services.
This covers a wide range of applications, from AI use in energy grids and transport systems to applications in healthcare and financial services.
Companies operating in these sectors must demonstrate that their AI systems meet strict safety and reliability standards. They must also conduct thorough risk assessments, implement robust monitoring systems, and ensure that their AI models are explainable and transparent.
While the AI Act generally prohibits certain uses of biometric identification and surveillance, it allows limited exceptions in the areas of law enforcement and national security.
This has proven to be a fertile area for AI development, with companies like Palantir offering advanced crime-prediction systems that are likely to conflict with the law.
Britain has already experimented extensively with AI-assisted surveillance. Although the UK is not a member of the EU, it is expected to voluntarily adopt some of the law's recommendations and controls.
There is uncertainty ahead
Reactions to the law have been mixed, with many companies in the EU technology sector expressing concerns about its impact on innovation and competition.
In June, over 150 executives from major companies such as Renault, Heineken, Airbus and Siemens signed an open letter expressing their concerns about the regulation's impact on the economy.
Jeannette zu Fürstenberg, founding partner of La Famiglia VC and one of the signatories, said the AI law could have “catastrophic effects on European competitiveness.”
France Digitale, which represents tech startups in Europe, criticized the law's definitions, saying: “We weren’t asking for regulation of technology as such, but regulation of its use. The solution Europe has chosen today amounts to regulating mathematics, which doesn't make much sense.”
However, the law also creates opportunities for innovation in the responsible development of AI. The EU's position is clear: if you protect people from AI, an ethical industry will follow. The hope is that this will spur the development of new tools and services for AI governance, transparency and risk mitigation.
Regnier told Euro News: “Everywhere you hear that the EU is only regulating (…) and that this will block innovation. That is not true.”
“The legislation is not there to stop companies from deploying their systems – quite the opposite. We want them to operate in the EU, but we want to protect our citizens and our companies.”
There is reason for optimism. Setting limits on AI-powered facial recognition, social scoring and behavioral analysis is intended to protect the civil liberties of EU citizens, which have long taken precedence over technology in EU rulemaking.
At the international level, the law can help build public trust in AI technologies by setting clear standards for their development and use.
Building long-term trust in AI is critical to the industry's future, so there may be business value to be gained here – but it will require patience to see it bear fruit.