Qwen2 is trained on data in 29 languages, including English and Chinese. It comes in four parameter sizes: 0.5B, 1.5B, 7B, and 72B. The 7B and 72B models support an extended context length of up to 128K tokens.
| Model | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B | Qwen2-72B |
|---|---|---|---|---|
| Params | 0.49B | 1.54B | 7.07B | 72.71B |
| Non-Emb Params | 0.35B | 1.31B | 5.98B | 70.21B |
| GQA | True | True | True | True |
| Tie Embedding | True | True | False | False |
| Context Length | 32K | 32K | 128K | 128K |
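The gap between "Params" and "Non-Emb Params" is the embedding parameter count, which also explains the "Tie Embedding" row: tied models share one matrix between the input embedding and the output head, while untied models carry two. A back-of-envelope sketch (the vocabulary and hidden sizes below are approximations for illustration, not official config values):

```python
VOCAB = 152_000  # approximate Qwen2 vocabulary size (assumption)

def embedding_params(hidden: int, tied: bool) -> float:
    """Embedding parameters in billions; untied models count the
    input embedding and the output head separately."""
    matrices = 1 if tied else 2
    return matrices * VOCAB * hidden / 1e9

# Hidden sizes here are assumed values for illustration.
print(f"0.5B (tied,   hidden~896):  {embedding_params(896, True):.2f}B")
print(f"7B   (untied, hidden~3584): {embedding_params(3584, False):.2f}B")
```

The results roughly match the table gaps (0.49B − 0.35B = 0.14B for the tied 0.5B model; 7.07B − 5.98B = 1.09B for the untied 7B model).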
## Supported languages
In addition to English and Chinese, Qwen2 supports the following 27 languages:
| Region | Languages |
|---|---|
| Western Europe | German, French, Spanish, Portuguese, Italian, Dutch |
| Eastern & Central Europe | Russian, Czech, Polish |
| Middle East | Arabic, Persian, Hebrew, Turkish |
| Eastern Asia | Japanese, Korean |
| South-Eastern Asia | Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog |
| Southern Asia | Hindi, Bengali, Urdu |
## Run Qwen2 with Ollama
```shell
ollama run qwen2
```
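Besides the interactive CLI, a running Ollama server exposes a local REST API on port 11434. A minimal Python sketch, assuming the server is running and the model has been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, model: str = "qwen2") -> bytes:
    # stream=False asks Ollama to return a single JSON object
    # instead of a stream of partial responses.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "qwen2") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```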
## License
All models except Qwen2 72B (both instruct and base variants) are licensed under Apache 2.0. Qwen2 72B remains under the original Qianwen License.
Hugging Face link: