Phi-3.5

Phi-3.5-mini is a lightweight, cutting-edge open model based on the same datasets as Phi-3, utilizing synthetic data and carefully curated publicly available websites with an emphasis on high-quality, reasoning-rich information.

Part of the Phi-3 model family, it supports a 128K token context length. The model went through a post-training process that combines supervised fine-tuning, proximal policy optimization (PPO), and direct preference optimization (DPO) to improve instruction following and strengthen safety.

This long context window makes Phi-3.5-mini well suited to tasks such as summarizing lengthy documents or meeting transcripts, answering questions over long texts, and retrieving information from extended documents.

Run Phi-3.5

ollama run phi3.5
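Beyond the CLI, a running Ollama server also exposes a local HTTP API. The sketch below is one minimal way to call Phi-3.5 through that API from Python; it assumes Ollama is running on its default port (11434) and that the model has already been pulled.

```python
import json
import urllib.request

# Ollama's default local generate endpoint (assumes the server is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="phi3.5"):
    """Build the JSON payload for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="phi3.5"):
    """Send the prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is Phi-3.5-mini?"))
```

Setting `"stream": False` returns the full completion in a single JSON object; leaving streaming on (the default) instead yields one JSON object per generated chunk.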

Primary Use Cases

The model is designed for commercial and research applications across various languages. It is ideal for general-purpose AI systems and applications that require:

  • Memory/compute-constrained environments
  • Low-latency scenarios
  • Strong reasoning capabilities, particularly in coding, math, and logic

This model aims to accelerate research in language and multimodal models, serving as a foundation for building generative AI-powered features.

Use Case Considerations

The model may not be suitable for all downstream tasks. Developers should account for common language model limitations and carefully assess accuracy, safety, and fairness before deploying it in a specific use case, especially in high-risk scenarios. Developers must also comply with the laws and regulations applicable to their use case, including those governing privacy and trade compliance.