
Open source Dracarys models enable generative AI-assisted coding

For fans of HBO's Game of Thrones, the word “Dracarys” carries a special meaning: it is the command for a dragon to breathe fire.

While there are no literal dragons in the world of generative AI, the term now has a meaning there, too: Dracarys is the name of Abacus.ai's new family of open large language models (LLMs) for coding.

Abacus.ai is an AI model development platform and tools provider with a habit of naming its technology after fictional dragons. In February, the company released Smaug-72B, named after the dragon from the classic fantasy novel The Hobbit. While Smaug is a general-purpose LLM, Dracarys is optimized for coding tasks.

For its first release, Abacus.ai applied its so-called “Dracarys recipe” to models in the 70B-parameter class. The recipe includes, among other things, optimized fine-tuning.

“It's a combination of training dataset and fine-tuning techniques that improve the coding abilities of any open-source LLM,” Bindu Reddy, CEO and co-founder of Abacus.ai, told VentureBeat. “We've shown that it improves both Qwen-2 72B and LLama-3.1 70b.”

Gen AI for coding tasks is a growing area

The generative AI market for application development and coding is a busy one.

The early pioneer in this area was GitHub Copilot, which helps developers with code completion and application development. Several startups, including Tabnine and Replit, have also built features that bring the power of LLMs to developers.

And then, of course, there are the LLM providers themselves. Dracarys offers a fine-tuned version of Meta's general-purpose Llama 3.1 model. Anthropic's Claude 3.5 Sonnet has also proven to be a popular and capable LLM for coding in 2024.

“Claude 3.5 is a great coding model, but it is a closed-source model,” said Reddy. “Our recipe improves on the open-source model, and Dracarys-72B-Instruct is the best coding model in its class.”

The numbers behind Dracarys and its AI coding capabilities

According to LiveBench benchmarks for the new models, the Dracarys recipe delivers a significant improvement.

LiveBench reports a coding score of 32.67 for the meta-llama-3.1-70b-instruct turbo model. The Dracarys-optimized version raises that score to 35.23. For qwen2, the results are even better: the existing qwen2-72b-instruct model has a coding score of 32.38, which the Dracarys recipe lifts to 38.95.
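Taken at face value, those LiveBench numbers imply relative gains that are easy to compute. A minimal sketch (scores are the ones quoted above, not independently re-measured; the short model labels are just for readability):

```python
# LiveBench coding scores quoted in the article: (base model, Dracarys-tuned)
scores = {
    "llama-3.1-70b-instruct": (32.67, 35.23),
    "qwen2-72b-instruct": (32.38, 38.95),
}

for model, (base, tuned) in scores.items():
    # Relative improvement of the Dracarys-tuned version over the base model
    gain_pct = (tuned - base) / base * 100
    print(f"{model}: {base} -> {tuned} ({gain_pct:+.1f}%)")
```

That works out to roughly a 7.8% relative improvement for the Llama-based model and about 20.3% for the Qwen2-based one, which matches the article's observation that the qwen2 results are the stronger of the two.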

While qwen2 and Llama 3.1 are currently the only models with Dracarys-tuned versions, Abacus.ai plans to add more models in the future.

“We will also release the Dracarys versions for DeepSeek Coder and Llama-3.1 400b,” Reddy said.

How Dracarys supports enterprise coding

There are several ways developers and enterprises can potentially benefit from the improved coding performance that Dracarys promises.

Abacus.ai is currently providing the model weights on Hugging Face for both the Llama- and Qwen2-based models. Reddy noted that the fine-tuned models are now also available as part of Abacus.ai's enterprise offering.

“They are great options for companies that don't want to send their data to public APIs such as OpenAI and Gemini,” Reddy said. “If there is enough interest, we will also make Dracarys available on our popular ChatLLM service, which is designed for small teams and professionals.”
