Microsoft has released its small language model (SLM) Phi-2, a 2.7 billion-parameter language model showcasing exceptional reasoning and language understanding abilities.
Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from a mix of synthetic and web datasets covering NLP and coding. Training took 14 days on 96 A100 GPUs. Phi-2 is a base model that has not undergone alignment through reinforcement learning from human feedback (RLHF) or instruction fine-tuning.
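Because Phi-2 is a raw base model, it is used as a plain text-completion model rather than a chat assistant. Below is a minimal sketch of running it with the Hugging Face transformers library; the "microsoft/phi-2" checkpoint ID follows Microsoft's naming convention but is an assumption here, and older transformers releases may require trust_remote_code=True.

```python
# Minimal sketch: text completion with Phi-2 via Hugging Face transformers.
# Assumption: the checkpoint is published on the Hub as "microsoft/phi-2".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 2.7B parameters fit on a single GPU in fp16
    device_map="auto",          # requires the `accelerate` package
)

# No RLHF or instruction tuning, so prompt it as plain next-word prediction.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the model has no chat template, prompts work best as completions to continue (code stubs, Q&A formats) rather than conversational instructions.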
Despite its modest 2.7 billion parameters, Phi-2 outperforms the Mistral 7B and Llama-2 (7B and 13B) models across various aggregated benchmarks. Particularly noteworthy is that it beats the far larger 70B-parameter Llama-2 model on multi-step reasoning tasks such as coding and math.
Furthermore, Phi-2 matches or outperforms the recently announced Google Gemini Nano 2, despite being smaller in size.
Microsoft also took a subtle dig at Google's staged demo video for Gemini, which drew significant criticism. In that video, Google showed its upcoming AI model, Gemini Ultra, solving a complex physics problem and correcting a student's errors.
Microsoft pointed out that, despite likely being a fraction of Gemini Ultra's size, Phi-2 produced accurate answers and corrected the student's mistakes using similar prompts.