SambaNova announces its new AI model Samba-CoE v0.2, which already outperforms Databricks DBRX

The AI chip manufacturer SambaNova Systems has announced a major achievement with its Samba-CoE v0.2 large language model (LLM).

Running at a speedy 330 tokens per second, the model outperforms several notable competitors, including Databricks' brand-new DBRX released just yesterday, MistralAI's Mixtral-8x7B, and Grok-1 from Elon Musk's xAI, among others.

What makes this achievement particularly notable is the model's efficiency: it reaches these speeds without compromising precision and requires only 8 sockets to operate, whereas alternatives require 576 sockets and run at lower bit rates.

Indeed, in our tests the LLM delivered remarkably fast responses to our prompts, producing a 425-word answer about the Milky Way at 330.42 tokens per second.

A question about quantum computing yielded a similarly robust and fast answer, delivered at a whopping 332.56 tokens per second.
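Throughput figures like these are straightforward to reproduce: time a streamed response and divide the token count by the elapsed wall-clock time. The snippet below is a minimal sketch of that measurement; `measure_tokens_per_second` and the `dummy_stream` stand-in are our own illustrative names, not part of any SambaNova API.

```python
import time
from typing import Iterable


def measure_tokens_per_second(token_stream: Iterable[str]) -> float:
    """Consume a stream of tokens and return the observed tokens/second."""
    count = 0
    start = time.perf_counter()
    for _token in token_stream:
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")


def dummy_stream(n_tokens: int = 570):
    """Hypothetical stand-in for a real streaming inference client."""
    for i in range(n_tokens):
        time.sleep(0.003)  # ~330 tokens/s, mimicking the reported rate
        yield f"tok{i}"


if __name__ == "__main__":
    tps = measure_tokens_per_second(dummy_stream())
    print(f"Observed throughput: {tps:.2f} tokens/s")
```

For scale, a 425-word English answer is roughly 550 to 600 tokens, so at around 330 tokens per second the full response arrives in under two seconds.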

Efficiency improvements

SambaNova's emphasis on using fewer sockets while maintaining high bit rates points to a significant advance in computational efficiency and model performance.

The company also announced the upcoming release of Samba-CoE v0.3 in collaboration with LeptonAI, signaling continued progress and innovation.

Additionally, SambaNova Systems notes that these advances build on open-source models from Samba-1 and the Sambaverse, using a unique approach to ensembling and model merging.

This methodology not only underpins the current version but also suggests a scalable and innovative path for future development.
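SambaNova has not published the internals of its ensembling approach, but the general "composition of experts" idea can be illustrated with a router that dispatches each prompt to the most suitable specialist model. The sketch below is purely illustrative: the `Expert` interface and the keyword-based router are assumptions for demonstration, not SambaNova's actual design, which would typically use a learned router.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Expert:
    """One specialist model in the ensemble (illustrative only)."""
    name: str
    topics: List[str]                 # keywords this expert handles well
    generate: Callable[[str], str]    # the expert's inference function


def route(prompt: str, experts: List[Expert], fallback: Expert) -> str:
    """Pick the expert whose topics best match the prompt, else fall back."""
    lowered = prompt.lower()
    best = max(experts, key=lambda e: sum(t in lowered for t in e.topics))
    if not any(t in lowered for t in best.topics):
        best = fallback
    return best.generate(prompt)


# Toy usage with stub "models" standing in for real LLMs:
astro = Expert("astro", ["galaxy", "milky way", "star"], lambda p: "[astro answer]")
physics = Expert("physics", ["quantum", "particle"], lambda p: "[physics answer]")
general = Expert("general", [], lambda p: "[general answer]")

print(route("How big is the Milky Way galaxy?", [astro, physics], general))
print(route("Explain quantum computing.", [astro, physics], general))
```

The appeal of this structure is that each expert stays small enough to serve cheaply, while the ensemble as a whole covers a broad range of topics.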

Comparisons with other models such as GoogleAI's Gemma-7B, MistralAI's Mixtral-8x7B, Meta's llama2-70B, Alibaba Group's Qwen-72B, TIIuae's Falcon-180B, and BigScience's BLOOM-176B show Samba-CoE v0.2's competitive advantage in this area.

This announcement is expected to spark interest in the AI and machine learning communities and prompt discussions about efficiency, performance, and the future of AI model development.

Background on SambaNova

SambaNova Systems was founded in 2017 in Palo Alto, California, by three co-founders: Kunle Olukotun, Rodrigo Liang, and Christopher Ré.

SambaNova initially focused on developing custom AI hardware chips but quickly broadened its ambitions to a wider range of offerings: machine learning services; a comprehensive enterprise platform for AI training, development, and deployment, known as the SambaNova Suite, launched in early 2023; and, earlier this year, Samba-1, a 1-trillion-parameter AI model built from 50 smaller models in a "composition of experts."

This evolution from a hardware-centric startup to a full-service AI innovator reflects the founders' commitment to enabling scalable, accessible AI technologies.

As SambaNova carves out its niche within AI, it is also positioning itself as a formidable competitor to established giants like Nvidia, having raised a $676 million Series D in 2021 at a valuation of over $5 billion.

Today, the company competes with established firms like Nvidia as well as other dedicated AI chip startups such as Groq.
