US homeland security chief attacks EU efforts to oversee artificial intelligence

The outgoing head of the US Department of Homeland Security believes Europe's "adversarial" relationship with tech firms is hampering a global approach to regulating artificial intelligence, which could lead to security vulnerabilities.

Alejandro Mayorkas told the Financial Times that the US – home to the world's leading AI firms, including OpenAI and Google – and Europe are not on a "strong footing" because of their differing regulatory approaches.

He stressed the need for "harmonisation across the Atlantic" and expressed concern that relations between governments and the tech industry in Europe are "more contentious" than in the US.

"Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a security perspective," Mayorkas said, adding that firms also struggle to navigate the differing rules of multiple jurisdictions.

The warning comes after the EU enacted its AI Act this year, considered the world's strictest law regulating the emerging technology. It introduces restrictions on "high-risk" AI systems and rules intended to provide more transparency about how AI groups use data.

The UK government is also planning to introduce legislation that would require AI firms to provide access to their models for safety assessments.

In the US, President-elect Donald Trump has vowed to repeal his predecessor Joe Biden's executive order on AI, which established an AI safety institute to conduct voluntary tests on models.

Mayorkas said he did not know whether the US safety institute would survive under the new administration, but warned that prescriptive laws could "stifle and harm" US leadership in the rapidly evolving sector.

Mayorkas' comments highlight the fissures between European and American approaches to AI oversight as policymakers attempt to balance innovation with security concerns. DHS is tasked with protecting the US from threats such as terrorism and cyber attacks.

That responsibility will fall to Kristi Noem, the governor of South Dakota, whom Trump has chosen to lead the department. The president-elect has also named venture capitalist David Sacks, a critic of technology regulation, as his AI and crypto czar.

In the US, efforts to regulate the technology have been thwarted by fears it could stifle innovation. In September, California Governor Gavin Newsom vetoed an AI safety bill that would have regulated the technology in the state, citing such concerns.

The Biden administration's early approach to AI regulation has been criticised as both too heavy-handed and not going far enough.

Silicon Valley venture capitalist Marc Andreessen said in a podcast interview this week that he was "very scared" of administration officials' plans for AI policy after meetings with Biden's team this summer, describing the officials as "out for blood".

Republican Senator Ted Cruz has also recently warned of "pernicious" foreign regulatory influence on the sector from policymakers in Europe and the UK.

Mayorkas said: "I worry about hasty legislation at the expense of innovation and ingenuity, because God knows our regulatory apparatus is not nimble, and our legislative apparatus is not nimble."

He defended his department's preference for "descriptive" rather than "prescriptive" guidelines. "A mandatory structure is perilous in a rapidly developing world."

DHS has actively integrated AI into its operations to show that government agencies can adopt new technologies while operating securely.

Generative AI models have been used to train refugee officers and to role-play interviews. An internal DHS AI chatbot, powered by OpenAI via Microsoft's Azure cloud computing platform, was launched this week.

During his term, Mayorkas created a framework for the safe use of AI in critical infrastructure, with recommendations for cloud and computing providers, AI developers, and infrastructure owners and operators to address risks. These included securing the physical premises of the data centres powering AI systems, monitoring activity, evaluating models for risks, biases and vulnerabilities, and protecting consumer data.

"We have to work well with the private sector," he added. "It is an extraordinarily important player in our country's critical infrastructure. The majority of it is actually owned and operated by the private sector. We have to implement a model of partnership and not one of adversity or tension."
