Mistral AI recently released a new Mixtral model: a sparse Mixture-of-Experts model with 8 experts of 22B parameters each. Running it requires over 200GB of VRAM in float16 and over 70GB in int4. Even so, individuals have successfully fine-tuned it on Apple Silicon laptops using the MLX framework. It features a 64K context window, twice the 32K of Mistral's previous models.
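Those VRAM figures follow from simple arithmetic on the parameter count. A minimal sketch, assuming a total of roughly 141B parameters (experts in a MoE share the attention layers, so the total is well below 8 × 22B; the 141B figure is an assumption, not from the announcement), counting weights only and ignoring activations and the KV cache:

```python
# Back-of-the-envelope estimate of weight memory at a given precision.
# Assumption: ~141B total parameters; weights only, no activations/KV cache.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Memory needed to hold the weights at the given precision, in GB."""
    return n_params * bits_per_param / 8 / 1e9

TOTAL_PARAMS = 141e9  # assumed total parameter count for the 8x22B MoE

print(f"float16: {weight_memory_gb(TOTAL_PARAMS, 16):.1f} GB")  # ~282 GB
print(f"int4:    {weight_memory_gb(TOTAL_PARAMS, 4):.1f} GB")   # ~70.5 GB
```

Under that assumption, float16 comes out near 282GB and int4 near 70GB, consistent with the "over 200GB" and "over 70GB" figures above.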
The model was released via torrent, a distribution method Mistral has often used recently. The license has not been confirmed yet, but a moderator on the company's Discord server suggested yesterday that it is Apache 2.0.