The case for open source artificial intelligence – the idea that the workings of AI models should be openly available for anyone to inspect, use and adapt – has just crossed an important threshold.
Mark Zuckerberg, CEO of Meta, claimed this week that his company's latest open-source Llama model was the first to reach "frontier-level" status, meaning it is essentially on a par with the most powerful AI from companies such as OpenAI, Google and Anthropic. From next year, Zuckerberg said, future Llama models will be the most advanced in the world.
Whether or not that happens, the welcome and unwelcome implications of releasing such a powerful technology for general use are plain to see. Models like Llama are the best hope of preventing a small group of large tech companies from tightening their stranglehold on advanced AI. But they could also put a powerful technology in the hands of disinformation spreaders, fraudsters, terrorists and rival nation states. If anyone in Washington was thinking of halting the open spread of advanced AI, now would probably be the time to do it.
There is something incongruous about Meta's rise to become the AI world's foremost champion of open source. Early on, the company then known as Facebook changed course: having begun as an open platform on which any developer could build services, it turned into one of the internet's most closed "walled gardens". Nor is Meta's open-source AI really open source. The Llama models were not released under a licence recognised by the Open Source Initiative, and Meta reserves the right to exclude other large companies from using its technology.
And yet the Llama models meet many of the requirements for openness – most people can inspect or adjust the "weights" that determine how they work – and Zuckerberg's claim that he converted to open source out of enlightened self-interest sounds credible.
Unlike Google or Microsoft, Meta does not sell direct access to AI models, and it would be hard for it to compete head-on in this technology. But relying on other companies' technology platforms carries risks of its own – as Meta found out in the smartphone world, when Apple changed its privacy policy for the iPhone in a way that damaged Meta's business.
The alternative – promoting an open-source rival that could gain broad support across the technology industry – is a tried and tested strategy. The list of companies that threw their weight behind the latest Llama model this week suggests it is starting to have an effect. They include Amazon, Microsoft and Google, which offer access to it through their clouds.
With his claim that open source is in some ways safer than the traditional proprietary alternative, Zuckerberg has harnessed a powerful force. Many users want to understand how the technology they rely on works on the inside, and much of the world's core infrastructure software is open source. In the words of computer security expert Bruce Schneier: "Openness = security. Only the tech giants want to persuade you otherwise."
But for all the advantages of the open-source approach, is it simply too dangerous to release powerful AI in this way?
Meta's CEO argues that it is a myth to believe the most valuable technology can be protected from foreign rivals: China, he says, will steal the secrets anyway. For a national security establishment that believes there are secrets that can be kept, that argument probably rings hollow.
As for less powerful adversaries, Zuckerberg argues that experience of running a social network shows the fight against malicious uses of AI is an arms race that can be won. As long as the good guys have more powerful machines than the bad guys, everything is fine. But that assumption may not hold: in theory, anyone can rent powerful computing on demand through one of the public cloud platforms.
One can imagine a future in which access to such enormous computing power is regulated. Like banks, cloud companies could be required to adhere to "know your customer" rules. There have also been suggestions that governments should directly control who has access to the chips needed to build advanced AI.
That may be the world we are eventually moving towards. But even if it is, there is still a long way to go – and thanks to open source, freely available AI models are already proliferating.