
Vectara raises $25 million and launches Mockingbird LLM for RAG enterprise applications

Vectara, an early pioneer of Retrieval Augmented Generation (RAG) technology, today announced it has raised $25 million in a Series A funding round as demand for its technology continues to grow among enterprise users. Vectara's total funding to date is $53.5 million.

Vectara emerged from stealth in October 2022, originally positioning its technology as a neural-search-as-a-service platform. It later evolved its messaging and called the technology “Grounded Search,” which is now better known to the broader market as RAG. The fundamental idea behind Grounded Search and RAG is that answers from a large language model (LLM) are grounded in, and referenced against, an enterprise knowledge store, typically some form of vector-enabled database. The Vectara platform integrates several components to enable a RAG pipeline, including the company's Boomerang vector embedding engine.
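The grounding idea described above can be sketched in a few lines: embed the query, retrieve the closest documents from the knowledge store, and hand only those passages to the LLM. The sketch below is purely illustrative and is not Vectara's API; it substitutes a toy bag-of-words similarity for a real embedding model such as Boomerang, and a plain list for a vector database.

```python
# Toy RAG retrieval step: ground an answer in a document store.
# NOT Vectara's API -- a real system would use a learned embedding model
# (e.g. Vectara's Boomerang) and a vector-enabled database.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts stand in for a dense vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Vectara raised a $25 million Series A round.",
    "Mockingbird is an LLM optimized for RAG.",
    "The weather today is sunny.",
]
context = retrieve("What is Mockingbird?", docs)
# The retrieved passages are then passed to the LLM as grounding context,
# so its answer is referenced against the knowledge store.
print(context[0])  # Mockingbird is an LLM optimized for RAG.
```

The key design point is that the LLM never answers from its parametric memory alone: the retrieval step constrains it to the enterprise's own documents.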

In addition to the new funding, the company today announced its new Mockingbird LLM, a model built specifically for RAG.

“We're releasing a new model called Mockingbird that's specifically trained and optimized to be more honest in its conclusions and stick to the facts as much as possible,” said Amr Awadallah, co-founder and CEO of Vectara, in an exclusive interview with VentureBeat.

Enterprise RAG is more than just a vector database

As enterprise interest in and adoption of RAG has increased over the past year, many new vendors have entered the field.

Many database technologies, including Oracle, PostgreSQL, DataStax, Neo4j, and MongoDB, to name a few, support vectors and RAG use cases. The growing availability of RAG technologies has dramatically increased competition in the market. Awadallah emphasized that his company has many clear differentiators and that the Vectara platform is more than simply a vector database connected to an LLM.

Awadallah noted that Vectara has developed a hallucination detection model that goes beyond the basic RAG foundation to help improve accuracy. Vectara's platform also provides explanations for its results and includes security measures to guard against attacks, which are important for regulated industries.

Another area where Vectara aims to differentiate itself from the competition is its integrated pipeline. Instead of requiring customers to assemble separate components such as a vector database, a query model, and a generation model, Vectara provides an integrated RAG pipeline with all the necessary components.

“Our differentiation is simple: we have the capabilities required for regulated industries,” said Awadallah.

Don't kill the mockingbird, it's the way to enterprise agents with RAG technology

With the new Mockingbird LLM, Vectara wants to further differentiate itself in the highly competitive market for enterprise RAG.

Awadallah noted that many RAG approaches use a general-purpose LLM such as OpenAI's GPT-4, while Mockingbird is a fine-tuned LLM specifically optimized for RAG workflows.

The advantages of the purpose-built LLM include its ability to further reduce the risk of hallucinations and to provide better citations.

“It makes sure that all the references are included accurately,” Awadallah said. “To really have good explainability, you should include all the possible citations you can in the answer, and Mockingbird has been tuned to do that.”

Vectara has gone a step further and designed Mockingbird to be optimized for generating structured output. This structured output could be in a format like JSON, which is becoming increasingly important for enabling agent-driven AI workflows.

“When you start relying on a RAG pipeline to call APIs, you're going to be calling an API to perform some kind of agent-based AI activity,” Awadallah said. “You absolutely must structure that output in the form of an API call, and that's what we support.”
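The point about structured output can be made concrete with a short sketch: if the model emits JSON rather than free text, the application can validate it and dispatch it as an API call. The schema and tool names below are hypothetical illustrations, not Mockingbird's actual output format.

```python
# Hedged sketch of why structured (JSON) model output matters for agent
# workflows: the application can parse, validate, and dispatch it as an
# API call. The schema here is hypothetical, not Mockingbird's format.
import json

def dispatch(model_output: str) -> str:
    call = json.loads(model_output)  # fails fast on malformed output
    # Hypothetical tool registry the agent is allowed to invoke.
    tools = {
        "get_order_status": lambda order_id: f"Order {order_id}: shipped",
    }
    if call["tool"] not in tools:
        raise ValueError(f"unknown tool: {call['tool']}")
    return tools[call["tool"]](**call["arguments"])

# A RAG-tuned model emitting structured output instead of free text:
output = '{"tool": "get_order_status", "arguments": {"order_id": "A-123"}}'
print(dispatch(output))  # Order A-123: shipped
```

With free-form text, the application would have to guess which API to call; with a JSON contract, malformed or unknown calls are rejected before any action is taken.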
