The UK will introduce legislation to guard against the risks of artificial intelligence next year, technology secretary Peter Kyle said, pledging to invest in the infrastructure that will support the sector’s growth.
Kyle told the Financial Times’ Future of AI Summit on Wednesday that the UK’s voluntary agreement on AI testing “works, it’s good code”, but that the long-awaited AI bill would focus on making such agreements with leading developers legally binding.
The legislation, which Kyle said would be put before MPs in the current parliament, will also transform the UK’s AI Safety Institute into a standalone government agency, giving it “the independence to act fully in the interests of British citizens”. Currently, the body is a directorate of the Department for Science, Innovation and Technology.
At the UK-hosted AI Safety Summit last November, companies including OpenAI, Google DeepMind and Anthropic signed a “groundbreaking” but non-binding agreement that allows partner governments to test their forthcoming large language models for risks and vulnerabilities before they are released to consumers.
Kyle said that while he was “not fatalistic” about advances in AI, “citizens need to know that we are mitigating the potential risks”.
The legislation will focus exclusively on ChatGPT-style “frontier” models: the most advanced systems, developed by only a small group of companies, capable of generating text, images and video.
Kyle also pledged to invest in the advanced computing power the UK needs to train its own sovereign AI models and LLMs, after ministers came under fire in August for scrapping funding for an “exascale” supercomputing project at the University of Edinburgh. The previous Conservative government had promised £800mn for the project.
Exascale supercomputing – defined as the ability to perform a billion billion (10¹⁸) operations per second – is widely seen as a critical step in enabling the widespread adoption of AI.
There are two known fully functional exascale computers in the world, both in the United States. Experts believe China also has at least one, although it has not taken part in international rankings of computing capability.
Kyle said the decision to scrap the existing Edinburgh exascale project was a “painful” consequence of the fiscal legacy Labour inherited from the Tories.
“I didn’t cut anything because you can’t cut something that doesn’t exist,” he said of the previous government’s failure to allocate money for the programme despite promising to do so.
While the government alone would not be able to raise the £100bn needed to invest in computing infrastructure, it could work with private companies and investors to “unlock that sort of money in the future”, he said.
Kyle also pointed out that the commitments made by the previous government would not have adequately met the needs of the LLM sector today, saying: “If we had planned this two years ago, we would have done it wrong.”
“I will be making announcements specifically on compute, relating to high-end computing capacity but also to the general computing capacity needed across the economy and society, for researchers and businesses alike,” he said. “But when I make an announcement . . . it’s funded, it’s costed and it’s delivered.”
Separately, Sarah Cardell, chief executive of the Competition and Markets Authority, said the UK could become a leader in AI innovation, and that the competition regulator’s “unique” approach to digital regulation through its new Digital Markets Unit would be a “very targeted, proportionate” approach to Big Tech.
The CMA's regulatory proposal will “not deter or discourage investment”, Cardell told the FT summit. “There is a big opportunity for this to be a growth platform for the UK tech sector.”