Artificial intelligence has brought an adrenaline rush to the US tech sector and a thrill to the world. Every day, we interact with tools that would have seemed like science fiction just a few years ago. Generative AI can do everything from giving your child a personal tutor to developing novel drugs.
Unfortunately, all of this is at risk because of a new bill in California called SB-1047, which threatens to stifle AI development.
If passed, the bill would discourage not only investment in AI but also the entrepreneurial spirit that drives technological progress around the world.
In the US, over 600 AI bills have been introduced in state legislatures this year, but SB-1047 goes further than most. It requires developers to prove that their AI models cannot be used to cause harm. This is an impossible demand.
AI models can be modified endlessly. Open-source models allow the public to access and build on their source code, which means developers could be held liable for what third parties do with them. There is no way to guarantee that no version of an AI model can ever cause harm. The bill rests on a fundamental misunderstanding of the technology.
The bill also claims to target only large technology companies. Yet it sets a $100 million threshold for “training costs” to determine a company’s size. AI development costs run into the billions, so this relatively low threshold could sweep in startups. Nor is there a clear definition of training costs. This is especially problematic because we are still in the early research phase of AI, where terms such as “pre-training” and “post-training” have no universal definitions.
The California State Senate has already passed a version of the bill. In August, this ill-conceived and deeply disruptive proposal could land on the desk of Governor Gavin Newsom, who could sign it into law.
The AI community has tried to raise the alarm. More than 100 leading figures in AI have signed an open letter against the bill.
Yann LeCun, Meta’s chief AI scientist, has warned that the bill’s “cascading liability clauses would make it very risky to offer AI platforms as open source… Meta will be fine, but AI startups will simply die.”
As an AI investor, I have already seen the bill’s chilling effect first hand. Promising open-source startups are considering relocating abroad. We risk a brain drain as the best talent moves to countries with more freedom.
The global implications are severe. While China’s policymakers are working closely with AI researchers, US lawmakers, however good their intentions, are writing laws without serious engagement with AI experts, investors or researchers. It is like writing medical regulations without consulting doctors.
The consequences could be severe and far-reaching. If AI development stalls in the US, global competitors will gain the upper hand and breakthrough ideas will be nipped in the bud before they can develop.
We need sensible AI regulation. We should focus on the misuse of AI models by imposing stricter penalties for AI-enabled crimes such as non-consensual deepfakes. We could set industry-wide standards for transparency around the training of large AI models and fund safety research at public universities.
These measures would promote responsible development while preserving the flexibility needed for breakthroughs.
California’s well-intentioned but misguided AI bill threatens to damage the US tech industry at a moment when the future of this technology stands at a crossroads. Our policymakers must recognize that now is the time for informed, collective action on AI regulation.