
The EU's proposal to postpone parts of its AI law signals a political shift that prioritizes big tech over fairness

The introduction of the EU's law on artificial intelligence has reached an important turning point. The law sets rules for the use of AI systems throughout the European Union. It officially came into force on August 1, 2024, with different rules taking effect at different times.

The European Commission has now proposed delaying parts of the law until 2027. This follows intense pressure from tech firms and the Trump administration.

The rules contained in the law are based on the risk posed by an AI system. For example, a high-risk AI system must be highly accurate and monitored by a human. These rules were due to apply from August 2026, or a year later, to firms developing high-risk AI systems that pose "serious risks to health, safety or fundamental rights". But now organizations using these technologies, whose purposes include analyzing resumes or evaluating loan applications, won't be covered by the law's provisions until December 2027.

The proposed delay is part of an overhaul of EU digital regulations, including data protection rules and data laws. The new rules may benefit firms including American tech giants, with critics calling them a "rollback" of digital protections. The EU says its "simpler" rules would "help European firms grow and stay at the forefront of technology, while promoting Europe's highest standards of fundamental rights, data protection, security and fairness."

The negative response to the proposals reveals transatlantic fault lines over how to effectively manage the use of AI. The first international speech by Vice President JD Vance, in February 2025, provides useful insight into the current US administration's stance on AI regulation.

The law contains specific rules for high-risk AI systems, such as hiring algorithms.

Vance claimed that excessive regulation of the sector "could destroy a transformative industry just as it's getting off the ground." He also took aim at EU regulations relevant to AI, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). He said that for smaller firms, "compliance with GDPR means paying endless regulatory compliance costs."

He added that the DSA puts a burden on tech firms by forcing them to remove content and monitor “so-called misinformation.” Vance further promised that the U.S. wouldn’t accept “foreign governments … tightening the screws” on American technology firms.

On the offensive

By August of this year, the Trump administration had launched its own AI policy offensive, including a plan to speed up AI innovation and national AI infrastructure. It announced executive orders to streamline data infrastructure, promote the export of American AI technologies and stop what the administration sees as potential bias in federal procurement and standards for AI.

Deregulation, open source development (whereby the code for AI systems is made available to developers) and "neutrality" were also sought. The latter appears to mean opposition to what the White House sees as "woke" or restrictive models of government.

Furthermore, President Trump has criticized the EU's digital services law and threatened additional tariffs in response to further fines or restrictions on US tech firms. The EU's reactions have varied. While some policymakers were reportedly shocked, others reminded US politicians that EU rules apply equally to all firms, regardless of their origin.

How can this gap in AI policy be closed? In March 2025, a group of interdisciplinary American and German researchers, from computer science to philosophy, met at the University of North Carolina in the town of Chapel Hill. Their goal was to answer a series of questions about the state of transatlantic AI governance and to understand the evolving technology negotiations between the US and the EU.

The recommendations of the meeting were summarized in a policy paper. The researchers viewed the combination of US innovation strength and EU human rights protections as key to addressing the pressing challenges of designing AI systems that benefit society.

The policy paper states: "Due to the interconnected nature of AI development, isolated regulatory approaches are not sufficient. AI systems are used worldwide and their effects spread across international markets and societies."

Key challenges identified in the paper include algorithmic bias (where AI-based systems favor certain parts of society or certain individuals), privacy protection, and labor market disruption (including, but not limited to, intellectual property theft). Also mentioned were the concentration of technological power and the environmental impact of the energy AI requires.

Based on the principles of human rights and social justice, the policy paper made a number of recommendations, ranging from clear guidelines for the ethical use of AI in the workplace to mechanisms to protect reliable information and prevent potential pressure on academic researchers to support certain viewpoints.

Ultimately, the goal is democratic and sustainable AI that is developed, deployed and managed in a way that preserves values such as public participation, transparency and accountability.

To achieve this, policy and regulation must strike a delicate balance between innovation and equity. These goals are not mutually exclusive; for all of this to work, they must coexist. It is a task that requires shared leadership from transatlantic partners, as they have provided for much of the last century.
