Meta introduces a new, more efficient Llama model

Meta has announced the most recent member of its Llama family of generative AI models: Llama 3.3 70B.

In a post on X announcing the model, Ahmad Al-Dahle, Meta's VP of generative AI, touted its efficiency.

“By leveraging recent advances in post-training techniques…this model improves core performance at a significantly lower cost,” Al-Dahle wrote.

Al-Dahle published a chart showing that Llama 3.3 70B outperforms Google's Gemini 1.5 Pro, OpenAI's GPT-4o and Amazon's newly released Nova Pro on a range of industry benchmarks, including MMLU, which evaluates a model's ability to understand language. Via email, a Meta spokesperson said the model should deliver improvements in areas such as math, general knowledge, instruction following and app usage.

Llama 3.3 70B, available for download from the AI development platform Hugging Face and other sources, including the official Llama website, is Meta's latest attempt to dominate the AI space with "open" models that can be used and commercialized for a range of applications.
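For developers who want to try the model from Hugging Face, the snippet below is a minimal sketch using the Transformers library. The repository id is an assumption based on Meta's usual naming convention, and downloading the weights typically requires accepting Meta's Llama license on Hugging Face as well as substantial GPU memory for a 70B-parameter model.

```python
# Minimal sketch: loading Llama 3.3 70B via Hugging Face Transformers.
# The repo id below is an assumption based on Meta's naming convention;
# access requires accepting Meta's Llama license on Hugging Face and
# enough GPU memory (or quantization) for a 70B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # shard layers across available GPUs
)

# Chat-style prompt using the model's built-in chat template.
messages = [{"role": "user", "content": "Summarize what Llama 3.3 70B is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```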

Meta's terms restrict how certain developers can use Llama models; platforms with more than 700 million monthly users must request a special license. For many developers, however, it is immaterial that Llama models aren't "open" in the strict sense. Case in point: according to Meta, Llama has racked up more than 650 million downloads.

Meta has also put Llama to use internally. Meta AI, the company's AI assistant built entirely on Llama models, now has nearly 600 million monthly active users, per Meta CEO Mark Zuckerberg. Zuckerberg claims that Meta AI is on track to become the most-used AI assistant in the world.

For Meta, Llama's openness has been both a blessing and a curse. In November, a report claimed that Chinese military researchers had used a Llama model to develop a defense chatbot. Meta responded by making its Llama models available to U.S. defense contractors.

Meta has also raised concerns about its ability to comply with the AI Act, the EU law that establishes a regulatory framework for AI, calling the law's implementation "too unpredictable" for its open-release strategy. A related issue for the company is the set of provisions in the GDPR, the EU's data privacy law, that relate to AI training. Meta trains its AI models on the public data of Instagram and Facebook users who haven't opted out – data that is subject to GDPR protections in Europe.

Earlier this year, EU regulators asked Meta to pause training on European user data while they assessed the company's GDPR compliance. Meta complied, while at the same time endorsing an open letter calling for a "modern interpretation" of the GDPR that "doesn't reject progress."

Not immune to the technical challenges facing other AI labs, Meta is expanding its computing infrastructure to train and serve future generations of Llama models. The company announced Wednesday that it will build a $10 billion AI data center in Louisiana – the largest AI data center Meta has ever built.

Zuckerberg said on Meta's fourth-quarter earnings call in August that the company will need 10 times more computing power to train the next major set of Llama models, Llama 4, than it needed to train Llama 3.

Training large language models can be a costly affair. Meta's capital expenditures rose nearly 33% to $8.5 billion in the second quarter of 2024, up from $6.4 billion a year earlier, driven by investments in servers, data centers and network infrastructure.
