
US Department of Commerce policy supports “open” AI models

The U.S. Department of Commerce has issued a policy advocating open models, reflecting the Biden-Harris administration's views on this controversial issue.

Closed-model advocates such as OpenAI and Google emphasize the risks of open models, while others such as Meta advocate for variants of open-source AI.

The DOC report, written by the National Telecommunications and Information Administration (NTIA), advocates “openness in artificial intelligence (AI) while calling for active monitoring of risks in high-performance AI models.”

The arguments against releasing models generally point to dual-use risks, namely that malicious actors can bend the models to their nefarious will. The report acknowledges the danger of dual use but says that the benefits outweigh the risks.

“The Biden-Harris administration is pulling out all of the stops to maximize the potential of AI while minimizing its risks,” said U.S. Secretary of Commerce Gina Raimondo.

“Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and making recommendations on how the U.S. government can prepare for and adapt to potential future challenges.”

The report says that while the U.S. government should actively monitor for potential emerging risks, it should not restrict the availability of open models.

Open weights open doors

Anyone can download Meta's latest Llama 3.1 405B model and its weights. Although the training data has not been made available, access to the weights gives users far more options than with GPT-4o or Claude 3.5 Sonnet, for instance.

By accessing the weights, researchers and developers get a better view of what is happening under the hood and can identify and fix biases, errors, or unexpected behavior within the model.
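To illustrate what that access looks like in practice, here is a minimal sketch using the Hugging Face transformers library. It assumes you have been granted access to Meta's gated Llama repositories; a smaller checkpoint is used purely for illustration, since the 405B weights follow the same pattern but require far more hardware.

```python
# Minimal sketch: loading open model weights and inspecting them locally.
# Assumes the transformers library is installed and access to the gated
# meta-llama repositories has been granted (the model ID below is an
# illustrative choice, not a recommendation from the NTIA report).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# With the weights downloaded, every parameter tensor is open to inspection,
# which is what enables third-party audits and fine-tuning.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```

Nothing comparable is possible with a closed, API-only model, where users only ever see the model's outputs.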

In addition, it is much easier for users to fine-tune the models for specific use cases, whether good or bad.

The report notes that “the accessibility provided by open weights significantly lowers the barrier to entry for fine-tuning models for both beneficial and harmful purposes. Adversarial actors can use fine-tuning to remove safeguards from open models and then freely distribute the model, ultimately limiting the value of mitigations.”

The risks and benefits of open AI models highlighted in the report include:

  • Public safety
  • Geopolitical considerations
  • Social problems
  • Competition, innovation and research

The report openly acknowledges the risks in each of these areas, but points out that the benefits outweigh them if the risks are managed.

With a closed model like GPT-4o, we all have to trust that OpenAI is doing a good job in its tuning efforts and not hiding potential risks. With an open model, any researcher can identify security vulnerabilities in a model or conduct a third-party audit to ensure the model is compliant.

The report states: “The availability of model weights could enable countries of concern to develop more robust ecosystems for advanced AI… and undermine the goals of U.S. chip controls.”

On the positive side, however, providing model weights “could strengthen cooperation with allies and deepen new relationships with development partners.”

The U.S. government is clearly sold on the concept of open AI models, even as it simultaneously issues federal and state regulations to reduce the risks of the technology. If the Trump-Vance ticket wins the upcoming election, we will likely see continued support for open AI, but with even less regulation.

Open-weights models may be great for innovation, but if emerging risks catch regulators by surprise, the AI genie can't be put back in the bottle.
