ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance
Abstract
In this paper, we introduce ILLUME, a unified multimodal large language model (MLLM) that seamlessly integrates multimodal understanding and generation capabilities within a single large language model through a unified next-token prediction formulation. To address the large dataset size typically required for image-text alignment, we enhance data efficiency through a vision tokenizer that incorporates semantic information and a progressive multi-stage training procedure. This approach reduces the pretraining dataset to just 15M, over four times fewer than is typically needed, while achieving performance competitive with or even superior to existing unified MLLMs such as Janus. Additionally, to promote synergistic enhancement between understanding and generation capabilities, which is under-explored in previous works, we introduce a novel self-enhancing multimodal alignment scheme. This scheme supervises the MLLM to self-assess the consistency between text descriptions and its self-generated images, helping the model interpret images more accurately and avoid the unrealistic or incorrect outputs that arise from misalignment in image generation. Extensive experiments show that ILLUME stands out among state-of-the-art unified MLLMs and competes with specialized models across various benchmarks for multimodal understanding, generation, and editing.
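The self-enhancing alignment scheme lends itself to a short sketch. Below is a minimal, hypothetical Python outline of the generate / self-assess / fine-tune loop the abstract describes; the model object and every method name (`generate_image_tokens`, `assess_consistency`, `finetune_on_assessments`) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class AlignmentSample:
    prompt: str
    image_tokens: list[int]  # discrete codes from the vision tokenizer
    consistency: float       # the model's own text-image consistency score in [0, 1]

def self_enhance(model, prompts, num_rounds=1, threshold=0.5):
    """Generate images, self-assess them, and fine-tune on the assessments.

    Because the same next-token predictor both draws (text -> image tokens)
    and sees (image tokens -> text), it can score its own generations and
    learn from the ones it judges misaligned. All model methods here are
    hypothetical placeholders.
    """
    for _ in range(num_rounds):
        samples = []
        for prompt in prompts:
            # 1) Draw: autoregressively generate discrete image tokens.
            image_tokens = model.generate_image_tokens(prompt)
            # 2) See: ask the model how well the image matches the prompt.
            score = model.assess_consistency(prompt, image_tokens)
            samples.append(AlignmentSample(prompt, image_tokens, score))
        # 3) Self-enhance: supervise the model on its own judgments so it
        #    interprets images more accurately and avoids generations it
        #    deems inconsistent with the text.
        aligned = [s for s in samples if s.consistency >= threshold]
        misaligned = [s for s in samples if s.consistency < threshold]
        model.finetune_on_assessments(aligned, misaligned)
    return model
```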
Community
We introduce ILLUME, a unified MLLM that seamlessly integrates multimodal understanding and generation capabilities in a single LLM. ILLUME excels among existing unified MLLMs and exhibits competitive performance compared to specialized models across a diverse range of benchmarks in multimodal understanding, generation, and editing.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Liquid: Language Models are Scalable Multi-modal Generators (2024)
- SILMM: Self-Improving Large Multimodal Models for Compositional Text-to-Image Generation (2024)
- JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation (2024)
- X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models (2024)
- LHRS-Bot-Nova: Improved Multimodal Large Language Model for Remote Sensing Vision-Language Interpretation (2024)
- TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation (2024)
- MUSE-VL: Modeling Unified VLM through Semantic Discrete Encoding (2024)