Qwen2

Qwen2 is trained on data in 29 languages, including English and Chinese. It comes in four parameter sizes: 0.5B, 1.5B, 7B, and 72B. The 7B and 72B models feature an extended context length of up to 128k tokens.

| Model | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B | Qwen2-72B |
|---|---|---|---|---|
| Params | 0.49B | 1.54B | 7.07B | 72.71B |
| Non-Emb Params | 0.35B | 1.31B | 5.98B | 70.21B |
| GQA | True | True | True | True |
| Tie Embedding | True | True | False | False |
| Context Length | 32K | 32K | 128K | 128K |

Supported languages

The following languages are supported in addition to English and Chinese:

| Region | Languages |
|---|---|
| Western Europe | German, French, Spanish, Portuguese, Italian, Dutch |
| Eastern & Central Europe | Russian, Czech, Polish |
| Middle East | Arabic, Persian, Hebrew, Turkish |
| Eastern Asia | Japanese, Korean |
| South-Eastern Asia | Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog |
| Southern Asia | Hindi, Bengali, Urdu |
Qwen2 is a cutting-edge collection of foundational and instruction-tuned language models, spanning from 0.5 to 72 billion parameters, and includes both dense and Mixture-of-Experts models. It surpasses most earlier open-weight models, including its predecessor Qwen1.5, and delivers competitive results against proprietary models across multiple benchmarks, excelling in language comprehension, text generation, multilingual support, coding, mathematics, and reasoning.

Run qwen2 with Ollama

ollama run qwen2
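Beyond the CLI, a running Ollama server also exposes a REST API. As a minimal sketch (assuming a local server on the default port 11434 with the qwen2 model already pulled), the model can be queried programmatically via the `/api/generate` endpoint:

```python
import json
from urllib import request

def build_generate_request(model="qwen2", prompt="Hello!", stream=False):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(prompt, host="http://localhost:11434"):
    # Requires a local Ollama server; `ollama run qwen2` pulls the model.
    payload = json.dumps(build_generate_request(prompt=prompt)).encode()
    req = request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # With stream=False the server returns a single JSON object
        # whose "response" field holds the full completion.
        return json.loads(resp.read())["response"]
```

With `stream` left at its default of `true`, the server instead returns newline-delimited JSON chunks, so non-streaming is the simpler choice for a first test.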

License

All models, with the exception of the Qwen2 72B instruct and base models, are licensed under Apache 2.0.

The Qwen2 72B models still use the original Qianwen License.

Hugging Face link:

https://huggingface.co/Qwen