mrfakename posted an update Apr 10
Mistral AI recently released a new Mixtral model. It is another Mixture of Experts model, with 8 experts of 22B parameters each. It requires over 200GB of VRAM to run in float16, and over 70GB of VRAM to run in int4. Even so, individuals have successfully fine-tuned it on Apple Silicon laptops using the MLX framework. It features a 64K context window, twice that of their previous models (32K).
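A quick back-of-the-envelope check of those VRAM numbers. This is only a sketch: the ~141B total parameter count is an assumption (the experts share the non-expert layers, so the total is well below a naive 8 × 22B) and is not stated in the post; it also ignores activation memory and the KV cache.

```python
def vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just for the weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

# Assumed total parameter count for Mixtral 8x22B (experts share
# attention layers, so this is less than 8 x 22B). Not from the post.
TOTAL_PARAMS = 141e9

print(vram_gb(TOTAL_PARAMS, 2.0))   # float16: 2 bytes per parameter
print(vram_gb(TOTAL_PARAMS, 0.5))   # int4: 0.5 bytes per parameter
```

The results (~282 GB for float16, ~70.5 GB for int4) line up with the "over 200GB" and "over 70GB" figures above.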

The model was released via torrent, a distribution method Mistral has often used for recent releases. The license has not been confirmed yet, but a moderator on their Discord server suggested yesterday that it is Apache 2.0.

Sources:
https://twitter.com/_philschmid/status/1778051363554934874
https://twitter.com/reach_vb/status/1777946948617605384

🌐 Torrent is good