
The EU AI Act represented an enormous step in regulating AI, but is there a price?

The EU reached a historic agreement on the AI Act, establishing a comprehensive legal framework for the technology’s use and development.

The Act sorts AI systems into four categories – unacceptable risk, high risk, limited risk, and minimal or no risk – with a different level of regulatory scrutiny for each.

AI has been around for decades, but don’t confuse it with generative AI – the likes of OpenAI’s ChatGPT, Meta’s LLaMA, and Google’s Bard – which has only been around for a year or so.

The EU first came up with the idea for the AI Act in 2019, long before generative AI broke out into the mainstream. But even in the past few months, we’ve seen text-only language models like GPT-3 give way to GPT-4V, a multimodal model that handles both text and images.

In December 2023, the EU confirmed its revisions to the Act following the explosion in generative AI, which is now the industry’s primary focus.

Meanwhile, generative AI companies are attracting billions in funding in the US, Europe, and across Asia and the Pacific. Governments have seen the value the technology can create for their economies, which is why, by and large, the approach to regulation has been ‘wait and see’ rather than strict action.

Gauging response to the AI Act

Responses to the AI Act have been mixed, with tech companies and officials from the French, German, and Italian governments suggesting that the Act may be too burdensome for the industry.

In June, over 150 executives from major companies like Renault, Heineken, Airbus, and Siemens united in an open letter, voicing their concerns about the regulation’s impact on business.

Jeannette zu Fürstenberg, a founding partner of La Famiglia VC and one of the signatories, said the AI Act could have “catastrophic implications for European competitiveness.”

One of the central issues raised in the letter is the stringent regulation of generative AI systems such as ChatGPT, Bard, and their European equivalents from startups like Mistral in France and Aleph Alpha in Germany.

Aleph Alpha, which aims to pioneer ‘sovereign European AI systems,’ recently raised $500m in Series B funding in one of Europe’s biggest funding rounds. Mistral, remarkably, is valued at $2 billion despite only being founded in May.

Of course, business dissent toward AI regulation comes as no surprise, but the key point is that people are fearful of the technology. The EU’s primary responsibility, like any government’s, lies first with its people, not its businesses.

Some polls indicate that the public would prefer a slower pace of AI development and generally distrusts the technology and its impacts. Leading non-business institutions, such as the Ada Lovelace Institute, generally find that the Act supports and protects people’s rights.

Reactions to the Act on X – a useful, if not entirely reliable, source of public opinion – are mixed. Some commenters responding directly to posts from EU officials argued that the EU is entangling its tech industry in a web of its own making.

“Deal! #AIAct” – EU Commissioner Thierry Breton, announcing the agreement on X.

Commenting on Breton’s post, someone who doesn’t see AI as dangerous said, “Let’s finally regulate algebra and geometry these are HIGH RISK TECHNOLOGIES.”

The jibe refers to the fact that the Act regulates seemingly innocuous uses of AI, such as its use in mathematical tasks. A French organization called France Digitale, which represents tech startups in Europe, said, “We called for not regulating the technology as such, but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn’t make much sense.”

Others point to the Act’s impact on innovation. “Stifle innovation through regulation, so that Europe will never have a world leading tech platform,” one states, encapsulating the concern that the EU’s regulatory approach could hinder its ability to compete on the global tech stage.

Another reply was blunter still: “you hate tech and economic growth… nobody takes you seriously.”

The question of the democratic legitimacy of these sweeping regulations is raised by another user: “Who democratically asked you this regulation? Stop pretending to do things to ‘protect’ people.” Another said, “You just sent half of the European AI/ML firms to the UK and America.”

One post went as far as to declare, “The AI Act is Europe’s suicide note to itself.”

Are these responses hyperbolic, or does the AI Act effectively end European AI competitiveness?

The EU sees early AI regulation as necessary for both protection and innovation

Protect people from AI, and a well-rounded, ethical industry will follow – that’s the Act’s broad stance. 

AI’s true risks, however, are a polarizing subject. At the beginning of the year, ChatGPT’s rise to fame was met with an avalanche of fear and anxiety about AI taking over, with statements from AI research institutes like the Center for AI Safety (CAIS) likening the technology’s risks to pandemics and nuclear war.

AI’s portrayal in popular culture and literature laid the groundwork for this paranoia to brew in people’s minds. From Terminator to the Machines in The Matrix, AI is often positioned as a combative force that ultimately turns on its creators once it knows it will succeed and finds a motive to do so.

However, this isn’t to dismiss AI’s risks as a mere artifact of popular culture belonging to the realm of fiction. Credible voices within the industry, and science at large, are genuinely concerned about the technology.

Two of the three ‘AI godfathers’ who paved the way for neural networks and deep learning – Yoshua Bengio and Geoffrey Hinton – are concerned about AI’s risks. The third, Yann LeCun, takes the opposite stance, arguing that AI development is safe and the technology won’t achieve destructive superintelligence.

When even those most qualified to judge AI cannot agree, it’s very tricky for lawmakers with no experience to act. AI regulation will likely get some of its definitions and stances wrong, since AI’s risks are not as clear-cut as those of something like nuclear power.

Does the EU AI Act effectively end European competition in the sector?

Comparing the EU’s approach to the AI industry with those of the US and Asia reveals different regulatory philosophies and practices.

The US has been advancing AI through significant investment in research and development, with multiple federal departments and organizations like the National Science Foundation and the Department of Energy playing key roles. Recently, individual states have also introduced laws to address harms.

Biden’s Executive Order increased the pressure on federal agencies to consult on and legislate for the technology, likely producing a patchwork of domain-specific laws rather than the EU’s brand of large-scale, bloc-wide regulation.

China, with a tech industry second only to the US, has largely targeted regulation at upholding its government’s socialist values rather than protecting people from risk.

The UK, an interesting case study of a post-Brexit alternative to EU regulation, has opted for a laissez-faire approach similar to the US’s. So far, this hasn’t produced an AI company on par with France’s Mistral or Germany’s Aleph Alpha, but that could change.

Compared to the powerhouses of the US and China, the EU’s technology ecosystem clearly underperforms, especially in market capitalization and research and development investment.

An analysis by McKinsey found that large European companies, including those in technology-creating industries like ICT and pharmaceuticals, were 20% less profitable, grew revenues 40% more slowly, invested 8% less, and spent 40% less on R&D than their counterparts in the sample between 2014 and 2019.

This gap is especially evident in tech-creating industries. In quantum computing, for example, 50% of the top tech companies investing in the technology are in the United States and 40% are in China, while none are in the EU. Similarly, the US captured 40% of external funding in AI between 2015 and 2020, while Europe managed only 12%.

The EU’s small tech industry. Source: Financial Times.

However, it’s also important to note that the European tech ecosystem has shown signs of strong growth and resilience, especially in venture capital investment.

In 2021, Europe saw a significant increase in venture capital investment, with a year-on-year growth rate of 143%, outpacing both North America and Asia. The surge was driven by major interest from the global VC community and a rise in late-stage funding, with European startups in sectors like fintech and SaaS benefiting significantly.

Despite these positive trends, the overall global influence of Europe’s tech industry remains relatively limited compared to the US and Asia. The US has five tech companies valued at over $1 trillion, while China’s two largest companies combined were worth more than the total value of all European public tech companies.

Europe’s largest public technology company at the time was valued at $163 billion, which would not even make the top 10 list in the US.

The point is that it’s very easy for onlookers to criticize AI regulation as hindering the EU’s tech industry when the EU has never been able to compete with the US. In some ways, though, it’s a pointless comparison, as nobody can compete with the US in GDP terms. Nor is GDP the only measure we should be concerned with when casting the AI Act as the ‘end of EU competitiveness.’

An article in Le Monde highlighted the EU’s poor GDP per capita, with EU countries like France, Germany, and Italy only comparable to some of the ‘poorer’ US states. It says, “Italy is just ahead of Mississippi, the poorest of the 50 states, while France is between Idaho and Arkansas, respectively 48th and 49th. Germany doesn’t save face: It lies between Oklahoma and Maine (38th and 39th).”

However, GDP per capita certainly isn’t everything. Life expectancy, in particular, is a contentious topic in the US, as statistics generally show a sharp decline in how long people live compared to other developed countries. In 2010, American men and women were expected to live three years less than the EU average and four or five years less than the average in some Scandinavian countries, Germany, France, and Italy.

In the end, on a purely economic comparison, the EU will never compete with the US, but the link between economic performance and the well-being of populations is non-linear.

Suggesting the AI Act will worsen people’s lives by eroding competitiveness in the AI industry doesn’t pay fair attention to its other impacts.

For example, the Act brings important rules on copyright to the table, hopefully curbing AI companies’ frivolous use of people’s intellectual property. It also prohibits certain uses of AI-powered facial recognition, social scoring, and behavioral analysis.

Erosion of competitiveness is perhaps more immediately tangible than regulation’s benefits, which remain hypothetical and contestable for now.

It could be argued that trading some economic growth for the AI Act’s potential benefits to people’s well-being is a savvy deal.

A balancing act

Despite criticisms, the EU often sets regulatory standards, as seen with the General Data Protection Regulation (GDPR).

Although GDPR has been criticized for favoring established tech companies over startups and doing little to directly boost the EU’s tech sector, it has become a de facto international standard for data protection, influencing global regulatory practices.

While the EU may not be the ideal regulator for AI, it is currently the most proactive and systematic in this area.

In the US, federal AI regulation is limited, with the Biden administration focusing more on guidance than binding laws. Consequently, tech companies often find the EU’s approach more predictable, despite its bureaucracy.

The EU’s efforts will likely serve as a reference point for other governments developing AI regulations.

Given AI’s transformative potential, systematic rules are crucial – though that doesn’t mean subjugating innovation and open-source development – and the EU is the first to attempt this delicate and intractable task.

It’s a valiant effort, and who knows – it may shield EU residents from the worst impacts of AI yet to come. Or it may see the EU sacrifice its AI industry for virtually no upshot. For now, it’s all a matter of opinion.
