Bills in California might be catastrophic for AI development

Several bills to regulate artificial intelligence are under consideration in California. If SB 1047 and AB 3211 become law next month, they are expected to have a major impact on AI developers.

Discussions about the risks of AI and how to contain them continue, and in the absence of federal legislation in the US, California has drafted laws that could serve as a precedent for other states.

SB 1047 and AB 3211 have each passed the California State Assembly and could take effect next month if the Senate votes them through.

Supporters of the bills say they provide long-overdue protections against the risks of unsafe AI models and AI-generated synthetic content. Critics say the requirements the bills mandate are unworkable and will stall AI development in the state.

Here's a quick look at the two bills and why they might prompt California AI developers to consider setting up shop in other states.

SB 1047

SB 1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," applies to developers of models that cost $100 million or more to train.

These developers will be required to carry out additional safety checks and will be liable if their models are used to cause "critical harm."

The bill defines "critical harm" as "the creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties…or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure."

This may sound like a good idea to ensure that developers of advanced AI models exercise due care, but is it practical?

A car manufacturer should be held liable if its vehicle is unsafe, but should it also be held liable if someone intentionally drives its vehicle into another person?

Meta's chief AI scientist Yann LeCun thinks not. He is critical of the bill, saying, "Regulators should regulate applications, not technology… Holding technology developers responsible for misuse of products based on their technology will simply stop technology development."

The bill also proposes that model creators be held liable if someone creates a derivative of their model and causes harm. This provision could spell the end of open-weight models, which, ironically, were recently endorsed by the Department of Commerce.

SB 1047 passed the California Senate in May by a vote of 32 to 1 and will become law if the Assembly passes it in August.

AB 3211

The California Provenance, Authenticity and Watermark Standards Act (AB 3211) aims to ensure transparency and accountability in the creation and distribution of synthetic content.

If content was generated by AI, the law requires it to be labeled or digitally watermarked to make that clear.

The watermark must be "maximally indelible… and designed to be as difficult as possible to remove." The industry has largely adopted the C2PA provenance standard, but it is trivial to strip that metadata from the content.
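To illustrate why metadata-based provenance is so fragile, here is a minimal sketch (pure standard library, using a toy 1×1 PNG with a `tEXt` comment chunk standing in for an embedded provenance record; real C2PA manifests use their own embedding, but the principle is the same): rewriting a file while keeping only its essential chunks silently discards any provenance record.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}  # chunks a decoder needs

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def iter_chunks(png: bytes):
    """Yield (type, data) pairs from a PNG byte string."""
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        yield ctype, png[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def strip_ancillary(png: bytes) -> bytes:
    """Rewrite the PNG keeping only critical chunks; all metadata
    (tEXt, iTXt, eXIf, provenance records) is dropped."""
    out = PNG_SIG
    for ctype, data in iter_chunks(png):
        if ctype in CRITICAL:
            out += chunk(ctype, data)
    return out

# Build a 1x1 grayscale PNG carrying a tEXt "provenance" chunk.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (PNG_SIG
       + chunk(b"IHDR", ihdr)
       + chunk(b"tEXt", b"Comment\x00provenance-metadata")
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))

stripped = strip_ancillary(png)
print(b"provenance-metadata" in png)       # True
print(b"provenance-metadata" in stripped)  # False
```

The image itself survives untouched; only the label is gone, which is exactly the gap between "maximally indelible" on paper and what file formats actually enforce.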

The bill requires that any conversational AI system “obtain the user’s explicit consent before starting the conversation.”

So if you use ChatGPT or Siri, the conversation would have to begin with "I'm an AI. Do you understand and are you OK with this?" Every time.

Another requirement is that developers of generative AI models must maintain a register of the AI-generated content their models produce, insofar as it could be mistaken for human-created content.
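The bill doesn't spell out how such a register would work; one could imagine something like a store of content fingerprints. A minimal sketch (the `GenerationRegister` class and its methods are hypothetical, not anything from the bill):

```python
import hashlib

class GenerationRegister:
    """Hypothetical register of AI-generated outputs: stores a
    SHA-256 fingerprint of each generated piece of content so it
    can later be checked against the register."""

    def __init__(self):
        self._hashes = set()

    def record(self, content: bytes) -> str:
        """Fingerprint and store one generated output."""
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def was_generated(self, content: bytes) -> bool:
        """Check whether content matches a registered output."""
        return hashlib.sha256(content).hexdigest() in self._hashes

register = GenerationRegister()
register.record(b"An AI-generated news paragraph.")
print(register.was_generated(b"An AI-generated news paragraph."))  # True
print(register.was_generated(b"A human-written paragraph."))      # False
```

Even in this generous reading, a single changed byte defeats the lookup, which hints at why critics call the requirement unworkable.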

For open-model developers, this would likely be impossible. If someone downloads a model and runs it locally, how can you track what they generate?

The bill applies to any model distributed in California, regardless of size or maker. If it passes the Senate next month, non-compliant AI models could not be distributed or even hosted in California. Sorry, Hugging Face and GitHub.

Violators face fines of "$1 million or 5% of the violator's annual worldwide turnover, whichever is larger." For companies like Meta or Google, that's billions of dollars.
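The arithmetic behind that claim is simple. A quick sketch (the function name and the $130B turnover figure are illustrative, roughly the scale of Meta's annual revenue):

```python
def ab3211_fine(annual_worldwide_turnover: float) -> float:
    """Penalty per the bill's formula: $1 million or 5% of annual
    worldwide turnover, whichever is larger."""
    return max(1_000_000, 0.05 * annual_worldwide_turnover)

# Small developer with $10M turnover: the $1M floor applies.
print(ab3211_fine(10_000_000))        # 1000000

# A company at roughly Meta's scale (~$130B): the 5% term dominates.
print(ab3211_fine(130_000_000_000))   # billions, not the $1M floor
```

At that scale a single violation dwarfs the entire fine regime most tech regulation has imposed to date.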

AI regulation is starting to shift Silicon Valley's political support toward the "Make America First in AI" approach of Trump allies. If SB 1047 and AB 3211 take effect next month, they could trigger an exodus of AI developers to less-regulated states.
