Microsoft has unveiled a series of updates to its Azure AI platform, including the expansion of the Phi-3 family of small language models (SLMs).
The company has added two new models to the family – Phi-3.5-MoE and Phi-3.5-mini – which are designed to enhance efficiency and accuracy.
Among the key benefits of Microsoft’s new models is their multilingual capability – they now support more than 20 languages.
Microsoft adds two new Phi-3 models
Phi-3.5-MoE, a 42-billion-parameter Mixture of Experts model, combines 16 smaller models into one. This approach lets Microsoft pair the speed and computational efficiency of smaller models with the quality and accuracy of larger ones.
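To illustrate the idea behind a Mixture of Experts, here is a toy sketch of top-k expert routing in plain Python. This is purely illustrative and not Microsoft’s implementation: the expert functions, gate weights, and routing scheme below are all made up for the example. The key point it demonstrates is that while many experts exist (large total parameter count), only a few run per input (small per-token compute).

```python
import math

def softmax(scores):
    """Convert raw gate scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Score every expert for input x, run only the top_k best,
    and combine their outputs weighted by the renormalized gate."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)  # renormalize over selected experts
    return sum(probs[i] / norm * experts[i](x) for i in top)

# 16 tiny stand-in "experts": each just scales the first input feature.
experts = [lambda x, s=i: (s + 1) * x[0] for i in range(16)]
# Arbitrary gate weights, one row per expert.
gate_weights = [[(i % 4) * 0.1, (i % 3) * 0.1] for i in range(16)]

out = moe_forward([1.0, 2.0], experts, gate_weights, top_k=2)
```

Only 2 of the 16 experts execute for this input, which is the efficiency trade-off a MoE model exploits at much larger scale.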
Phi-3.5-mini is significantly smaller, at 3.8 billion parameters; however, its multilingual capabilities unlock broader global use cases. It supports Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish and Ukrainian.
Microsoft says that Phi-3.5-mini is a significant update to the Phi-3-mini model launched two months ago, incorporating user feedback.
In addition to two new models, Microsoft has also introduced several new tools and services within Azure AI to facilitate easier extraction of insights from unstructured data.
More broadly, Microsoft will launch the AI21 Jamba 1.5 Large and Jamba 1.5 models on Azure AI as models as a service, offering long-context processing capabilities.
Other announcements included the general availability of the VS Code extension for Azure Machine Learning and of the Conversational PII Detection service in Azure AI Language.
“We continue to invest across the Azure AI stack to bring state of the art innovation to our customers so you can build, deploy, and scale your AI solutions safely and confidently,” stated Azure AI Platform Corporate VP Eric Boyd.