Microsoft has unveiled the most recent member of its Phi family of generative AI models.
The model, called Phi-4, improves on its predecessors in several areas, according to Microsoft, particularly in solving mathematical problems. This is partly a result of higher-quality training data.
Phi-4 is available in a very limited fashion as of Thursday evening: on Microsoft's recently launched Azure AI Foundry development platform, and only for research purposes under a Microsoft research license agreement.
Phi-4 is Microsoft's latest small language model, with 14 billion parameters, and it competes with other small models such as GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. These smaller AI models are often faster and cheaper to run, and their performance has steadily improved in recent years.
In this case, Microsoft attributes Phi-4's jump in performance to the use of “high-quality synthetic data sets” alongside high-quality data sets of human-generated content, as well as some unspecified post-training improvements.
Many AI labs these days are looking closely at the innovations they can make around synthetic data and post-training. Alexandr Wang, CEO of Scale AI, said in a tweet on Thursday that “we have reached a pre-training data wall,” echoing several reports on the topic in recent weeks.
Notably, Phi-4 is the first model in the Phi series to arrive after the departure of SĂ©bastien Bubeck. Previously one of Microsoft's vice presidents of AI and a key figure in the development of the company's Phi models, Bubeck left the company in October to join OpenAI.