Mixtral
A set of Mixture of Experts (MoE) models with open weights, released by Mistral AI in 8x7b and 8x22b parameter sizes. The Mixtral large language models (LLMs) are pretrained generative Sparse Mixture of Experts (SMoE) models.

Sizes

Mixtral 8x22B

To run: ollama run mixtral:8x22b

Mixtral 8x22B sets a new benchmark for performance and efficiency.
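Beyond the CLI command above, a pulled Mixtral model can also be queried programmatically through the Ollama HTTP API. The sketch below is illustrative only: it assumes a local Ollama server at its default address (http://localhost:11434), that the mixtral:8x22b tag has already been pulled, and uses a made-up prompt.

```python
# Minimal sketch: query a locally running Mixtral model via the Ollama HTTP API.
# Assumes the default server address and that `ollama pull mixtral:8x22b` was run.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral:8x22b",   # or "mixtral:8x7b" for the smaller variant
        "prompt": "Explain what a sparse Mixture of Experts model is.",
        "stream": False,            # return a single JSON object instead of a stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])  # generated text
```

With "stream" set to False the server returns one JSON object whose "response" field holds the full completion; leaving streaming on instead yields a sequence of partial JSON objects, which suits interactive use.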