Abstract
We introduce the MiniMax-01 series, comprising MiniMax-Text-01 and MiniMax-VL-01, which are comparable to top-tier models while offering superior capabilities for processing longer contexts. At its core are lightning attention and its efficient scaling. To maximize computational capacity, we integrate it with Mixture of Experts (MoE), creating a model with 32 experts and 456 billion total parameters, of which 45.9 billion are activated for each token. We develop an optimized parallel strategy and highly efficient computation-communication overlap techniques for MoE and lightning attention. This approach enables efficient training and inference of models with hundreds of billions of parameters on contexts spanning millions of tokens. The context window of MiniMax-Text-01 reaches up to 1 million tokens during training and extrapolates to 4 million tokens during inference at an affordable cost. Our vision-language model, MiniMax-VL-01, is built through continued training with 512 billion vision-language tokens. Experiments on both standard and in-house benchmarks show that our models match the performance of state-of-the-art models such as GPT-4o and Claude-3.5-Sonnet while offering a 20-32 times longer context window. We publicly release MiniMax-01 at https://github.com/MiniMax-AI.
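The abstract only names the ingredients, so here is a minimal PyTorch sketch of how a lightning-attention-style, block-wise causal linear attention keeps cost linear in sequence length: each block attends within itself with an explicit causal mask, while a running k^T v state carries the prefix across blocks. This is an illustrative approximation under our own assumptions (no decay, no normalization, no I/O-aware tiled kernel, and none of the hybrid softmax-attention layers the model also uses), not the released implementation; the function name and block size are made up.

```python
import torch

def lightning_attention_sketch(q, k, v, block_size=64):
    """Block-wise causal linear attention sketch (illustrative, not the official kernel).

    q, k, v: (seq_len, d) tensors. Returns a (seq_len, d) tensor.
    """
    seq_len, d = q.shape
    out = torch.zeros_like(v)
    # Running sum of k_s^T v_s over all previous blocks (the "prefix state").
    kv_state = torch.zeros(d, d, dtype=q.dtype, device=q.device)

    for start in range(0, seq_len, block_size):
        end = min(start + block_size, seq_len)
        qb, kb, vb = q[start:end], k[start:end], v[start:end]
        n = end - start

        # Inter-block part: contribution of all earlier blocks via the prefix state.
        inter = qb @ kv_state

        # Intra-block part: causal attention inside the block,
        # quadratic only in block_size, not in seq_len.
        scores = qb @ kb.T
        mask = torch.tril(torch.ones(n, n, dtype=torch.bool, device=q.device))
        intra = scores.masked_fill(~mask, 0.0) @ vb

        out[start:end] = inter + intra
        # Fold this block into the prefix state for later blocks.
        kv_state = kv_state + kb.T @ vb

    return out
```

Because the prefix is compressed into a fixed-size d-by-d state, per-block work stays bounded and total cost grows linearly with sequence length, which is what makes million-token training and multi-million-token extrapolation tractable in principle.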
Community
A technical report from MiniMax. The authors are listed in alphabetical order. The model is open-sourced at https://github.com/MiniMax-AI.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding (2024)
- [CLS] Token Tells Everything Needed for Training-free Efficient MLLMs (2024)
- Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration (2025)
- B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens (2024)
- VisionZip: Longer is Better but Not Necessary in Vision Language Models (2024)
- Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings (2024)
- PruneVid: Visual Token Pruning for Efficient Video Large Language Models (2024)
We made a deep dive video for this paper: https://www.youtube.com/watch?v=eh7oDAxUoPg. Happy learning 🤓 and stretching 💪 together!
Oh, and btw, we tried using MiniMax for this paper deep dive, but it kept hanging on us 😅 (maybe our long text + long PDF combo was just too much? It shouldn't be, though… or maybe MiniMax just doesn't like deep diving itself?! 🤔). That said, their PDF-on-the-side feature is super sweet 🍭 for paper reading and live QA! 📝