
Open Source Initiative contradicts Meta regarding “open” AI

The Open Source Initiative (OSI) has published an updated draft definition of open source AI and stated that Meta's models don't fall under it, despite the company's claims.

Mark Zuckerberg has been open about Meta's commitment to what he sees as open-source AI. While models like Llama 3.1 are less opaque than OpenAI or Google's proprietary models, discussions within the OSI community suggest that Meta is using the term incorrectly.

At a public online town hall event on Friday, the OSI discussed the standards it believes a truly open source AI model should meet. The OSI refers to these criteria as the "4 Freedoms" and says that an open source AI is "an AI system made available under terms and in a way that grant the following freedoms:

  1. Use the system for any purpose without asking for permission.
  2. Study how the system works and check its components.
  3. Modify the system for any purpose, including changing the output.
  4. Release the system in order that others can use it for any purpose, with or without modifications.”

In order to modify an AI model, the open source AI definition states that the weights and source code must be open and the training dataset must be available.

Meta's license places some restrictions on the use of its models, and it has refused to release the training data it used to train them. If one accepts that the OSI is the guardian of what "open source" means, then this means Meta is stretching the truth when it calls its models "open."

The OSI is a California-based nonprofit that relies on community input to develop open-source standards. Some members of that community have accused Mark Zuckerberg of "openwashing" Meta's models and pushing the industry to accept his version instead of the OSI definition.

Shuji Sado, chairman of the Open Source Group Japan, said, "It is possible that Zuckerberg has a different definition of open source than we do," and suggested that the unclear legal situation regarding AI training data and copyright could be the reason.

Words are essential

This may all sound like a dispute over semantics, but depending on which definition the AI industry adopts, it could have serious legal consequences.

Meta has had a tough time navigating EU GDPR laws due to its insatiable appetite for users' social media data. Some say Meta's loose definition of "open source AI" is an attempt to sidestep new laws like the EU AI Act.

The law provides a limited exemption for general purpose AI models (GPAIMs) released under open source licenses. These models are exempt from certain transparency requirements but must still provide a summary of the content used to train the model.

On the other hand, California's proposed AI safety bill, SB 1047, discourages companies like Meta from adapting their models to the OSI definition. The bill mandates complex safety protocols for "open" models and holds developers accountable for harmful modifications and misuse by malicious actors.

SB 1047 defines open source AI tools as "artificial intelligence models that are made freely available and may be freely modified and redistributed." Does this mean that an AI model that can be fine-tuned by a user is "open," or does the definition only apply if the model meets all of the OSI criteria?

For now, the vagueness gives Meta a marketing advantage and room to negotiate regulations. At some point, the industry will have to settle on a definition. Will it be set by a big technology company like Meta, or by a community-driven organization like the OSI?
