
SambaNova now offers a bundle of generative AI models

SambaNova, an AI chip startup that has raised over $1.1 billion in VC funding to date, is taking on OpenAI and other rivals with a new generative AI product aimed at enterprise customers.

SambaNova today announced Samba-1, an AI-powered system designed for tasks such as text rewriting, coding, language translation and more. The company describes the architecture as a "composite of experts," its term for a bundle of open-source generative AI models, 56 in total.

Rodrigo Liang, co-founder and CEO of SambaNova, says Samba-1 lets companies address diverse AI use cases while avoiding the challenges of implementing AI systems ad hoc.

"Samba-1 is fully modular and allows companies to add new models asynchronously… without eliminating their previous investment," Liang said in an interview with TechCrunch. "At the same time, the models are iterative, extensible and easy to update, giving our customers scope for customization as new models are integrated."

Liang is a good salesman, and what he describes sounds promising. But is Samba-1 actually superior to the many, many other AI systems for business tasks, not least OpenAI's models?

It depends on the use case.

The main purported advantage of Samba-1 is that customers control how prompts and requests are routed, because Samba-1 is a collection of independently trained models rather than a single large model. A request made to a large model like GPT-4 can only travel one way: through GPT-4. A request made to Samba-1, by contrast, goes in one of 56 directions (to one of the 56 models that make up Samba-1), depending on the rules and policies the customer sets.
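To make the routing idea concrete, here is a minimal sketch of policy-based dispatch in Python. This is an illustration of the general technique, not SambaNova's actual API; the model names, keywords and the `route` function are all hypothetical.

```python
# Hypothetical sketch of rule-based prompt routing across a bundle of
# expert models. Everything here (model names, keyword rules) is
# illustrative; it is not SambaNova's real interface.

def route(prompt: str, rules: list[tuple[str, str]], default: str) -> str:
    """Return the name of the expert model that should handle a prompt.

    `rules` is an ordered list of (keyword, model_name) pairs; the first
    keyword found in the prompt wins, otherwise `default` is used.
    """
    lowered = prompt.lower()
    for keyword, model_name in rules:
        if keyword in lowered:
            return model_name
    return default

# Example policy: coding questions go to a code model, translation
# requests to a translation model, everything else to a general model.
policy = [
    ("code", "code-expert"),
    ("translate", "translation-expert"),
]

print(route("Please translate this paragraph into French", policy, "general-expert"))
# -> translation-expert
```

In a real deployment the customer's "rules and policies" would presumably be richer (classifiers, metadata, access controls), but the shape of the decision is the same: one request, one of many possible destinations.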

This multi-model strategy also reduces the cost of fine-tuning on a customer's data, according to Liang, because customers only need to fine-tune individual models or small groups of models rather than one large-scale model. And, in theory, it could lead to more reliable (e.g., less hallucination-prone) responses to prompts, he says, since one model's response can be compared against the others', albeit at the expense of additional compute.
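The reliability claim rests on a simple idea: if several independently trained models agree, the shared answer is more likely to be correct. A toy sketch of that comparison, with the model outputs stubbed in as plain strings (not real model calls):

```python
# Illustrative sketch of cross-checking answers from several expert
# models, as described above: the majority answer is treated as the more
# reliable one. `answers` would come from querying multiple models; here
# it is stubbed with literal strings.
from collections import Counter

def consensus(answers: list[str]) -> tuple[str, float]:
    """Return the most common answer and the fraction of models agreeing."""
    top, count = Counter(answers).most_common(1)[0]
    return top, count / len(answers)

# Three hypothetical experts answer the same prompt; two agree.
answer, agreement = consensus(["Paris", "Paris", "Lyon"])
print(answer)  # -> Paris
```

This is also where the extra computing power Liang mentions comes in: every cross-check multiplies the number of inference calls per request.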

"With this…architecture, you don't have to break larger tasks into smaller ones, and therefore you can train many smaller models," Liang said, adding that Samba-1 can be deployed on-premises or in a hosted environment depending on customer needs. "With a large model, the computing power per (request) is higher, so the training costs are higher. (Samba-1's) architecture reduces training costs."

I'd counter that many vendors, including OpenAI, offer attractive pricing for fine-tuning large generative models, and that several startups, including Martian and Credal, provide tools to route prompts between third-party models based on manually programmed or automated rules.

But novelty per se isn't what SambaNova is selling. Rather, it's a set-it-and-forget-it package: a full-stack solution with everything, including AI chips, needed to build AI applications. And for some companies, that may be more attractive than anything else on the table.

"Samba-1 offers each company its own customized GPT model that 'privatizes' its data and adapts to the needs of its organization," said Liang. "The models are trained on our customers' private data, hosted on a single (server) rack, at a tenth of the cost of other solutions."
