
Microsoft introduces serverless fine-tuning for its small Phi-3 language model

Microsoft is a major backer and partner of OpenAI, but that doesn't mean it wants to let its ally win the generative AI game.

As if to prove it, Microsoft today announced a new way to fine-tune its Phi-3 small language model without developers having to manage their own servers, and (initially) free of charge.

Fine-tuning is the process of adjusting an AI model, whether through system prompts or by updating its underlying weights (parameters), to achieve different and better-suited behavior for specific use cases and end users, or even to add new capabilities.

What is Phi-3?

The company introduced Phi-3, a 3.8-billion-parameter model, in April as a low-cost, enterprise-grade option on which third-party developers can build new applications and software.

Although Phi-3 is significantly smaller than most other leading language models (Meta's Llama 3.1, for instance, has a version with 405 billion parameters; parameters are the "settings" that control a neural network's processing and responses), Phi-3's performance was on par with OpenAI's GPT-3.5 model, according to comments made to VentureBeat at the time by Sébastien Bubeck, VP of Microsoft Generative AI.

Phi-3 is specifically designed to offer cost-effective performance in coding, common-sense reasoning, and general knowledge.

Phi-3 has since grown into a full family of six models with different parameter counts and context lengths (the number of tokens, or numerical representations of data, a user can provide in a single input), ranging from 4,000 to 128,000 tokens. Pricing runs from $0.0003 to $0.0005 per 1,000 input tokens.

Converted to the more typical per-million-token pricing, however, that comes to $0.30 per 1 million input tokens and $0.90 per 1 million output tokens. That's exactly double OpenAI's new GPT-4o mini price for input tokens and about 1.5 times its price for output tokens.
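As a sanity check, the per-1,000-token to per-million-token conversion and the comparison against GPT-4o mini work out in a few lines. The GPT-4o mini rates below ($0.15 input / $0.60 output per 1 million tokens) are taken as current list prices and may change:

```python
def per_million(price_per_1k: float) -> float:
    """Convert a per-1,000-token price to a per-1,000,000-token price."""
    return price_per_1k * 1_000

# Phi-3 list prices quoted above.
phi3_input = per_million(0.0003)   # $0.30 per 1M input tokens
phi3_output = 0.90                 # $ per 1M output tokens, per the article

# GPT-4o mini list prices ($ per 1M tokens); assumed current, may change.
gpt4o_mini_input, gpt4o_mini_output = 0.15, 0.60

print(round(phi3_input / gpt4o_mini_input, 2))    # 2.0 -> double the input price
print(round(phi3_output / gpt4o_mini_output, 2))  # 1.5 -> 1.5x the output price
```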

Phi-3 was designed to be safe for enterprise use, with safeguards in place to reduce bias and toxicity. When it was first announced, Microsoft's Bubeck touted that it could be fine-tuned for specific enterprise use cases.

“You can bring in your data and fine-tune this general model and get amazing performance in narrow verticals,” he told us.

Until now, however, there was no serverless option for fine-tuning: if you wanted to do that, you had to set up your own Microsoft Azure server or download the model and run it on your own local machine, which might not have enough disk space.

Serverless fine-tuning opens up new possibilities

Today, however, Microsoft announced the general availability of its “Models-as-a-Service (serverless endpoint)” offering in its Azure AI development platform.

It also announced that “Phi-3-small is now available via a serverless endpoint, allowing developers to quickly and easily get started with AI development without having to manage the underlying infrastructure.”

Phi-3-vision, which can process image inputs, will “soon also be available via a serverless endpoint,” according to a Microsoft blog post.

But those models are only available “as is” through Microsoft's Azure AI development platform. Developers can build apps on top of them, but they can't create their own versions of the models tailored to their own use cases.

Developers looking to do that can turn to Phi-3-mini and Phi-3-medium, according to Microsoft, which can be fine-tuned with third-party data “to create AI experiences that are more relevant, safer, and more economical for their users.”

“Given their small compute footprint and cloud and edge compatibility, Phi-3 models are well suited to fine-tuning to improve base model performance across a variety of scenarios, including learning a new skill or task (e.g., tutoring) or enhancing the consistency and quality of the response (e.g., tone or style of responses in chat/Q&A),” the company writes.

In particular, Microsoft notes that educational software company Khan Academy is already using a fine-tuned Phi-3 to improve the performance of its Khanmigo for Teachers assistant, which is built on Microsoft's Azure OpenAI Service.

A new price and performance battle for enterprise AI developers

Pricing for serverless fine-tuning of Phi-3-mini-4k-instruct starts at $0.004 per 1,000 tokens ($4 per 1 million tokens), while pricing for the medium model is not yet listed.
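At that rate, a back-of-the-envelope estimate of a fine-tuning job's cost is straightforward. The dataset size and epoch count below are illustrative assumptions, not Azure defaults:

```python
RATE_PER_1K_TOKENS = 0.004  # listed Phi-3-mini-4k-instruct fine-tuning rate

def tuning_cost(total_tokens: int, epochs: int = 1) -> float:
    """Estimated dollar cost for a fine-tuning job of the given size."""
    return total_tokens / 1_000 * RATE_PER_1K_TOKENS * epochs

print(tuning_cost(1_000_000))          # 4.0 -> $4 per 1M tokens, as listed
print(tuning_cost(500_000, epochs=3))  # 6.0 -> $6 for 500K tokens x 3 epochs
```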

This is a clear win for developers who want to stay within the Microsoft ecosystem, but it also notably competes with the efforts of Microsoft's own ally, OpenAI, to attract enterprise AI developers.

Just a few days ago, OpenAI announced free fine-tuning of GPT-4o mini with up to 2 million tokens per day through September 23rd for so-called “Tier 4 and 5” users of its application programming interface (API), that is, those who have spent at least $250 or $1,000 on API credits, respectively.

Hot on the heels of Meta's release of its open-source Llama 3.1 family and Mistral's new Mistral Large 2 model, both of which can be fine-tuned for different use cases, it's clear that the race to deliver compelling AI options for enterprise development is in full swing, with AI vendors vying for developers' favor with both small and large models.
