Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens
Abstract
Image tokenizers form the foundation of modern text-to-image generative models but are notoriously difficult to train. Furthermore, most existing text-to-image models rely on large-scale, high-quality private datasets, making them challenging to replicate. In this work, we introduce Text-Aware Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful image tokenizer that can utilize either discrete or continuous 1-dimensional tokens. TA-TiTok uniquely integrates textual information during the tokenizer decoding stage (i.e., de-tokenization), accelerating convergence and enhancing performance. TA-TiTok also benefits from a simplified, yet effective, one-stage training process, eliminating the need for the complex two-stage distillation used in previous 1-dimensional tokenizers. This design allows for seamless scalability to large datasets. Building on this, we introduce a family of text-to-image Masked Generative Models (MaskGen), trained exclusively on open data while achieving comparable performance to models trained on private data. We aim to release both the efficient, strong TA-TiTok tokenizers and the open-data, open-weight MaskGen models to promote broader access and democratize the field of text-to-image masked generative models.
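The abstract's key mechanism is conditioning the de-tokenization (decoding) stage on text. As a rough illustration only, here is a minimal numpy sketch of that idea: learned patch queries cross-attend over the concatenation of 1-D latent tokens and text embeddings before being projected to pixel patches. All names, dimensions, and the single-head attention layout are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): 32 one-dimensional
# latent tokens, 64-dim embeddings, an image decoded as 4 patches.
num_latent, dim, num_patches, patch_dim = 32, 64, 4, 192

def text_aware_detokenize(latents, text_emb, patch_queries, w_kv, w_out):
    """Sketch of text-aware de-tokenization: the decoder sees both the
    1-D latent tokens and the text embeddings, so reconstruction can be
    steered by the caption (the core idea behind TA-TiTok's decoder)."""
    # Condition on text by appending text tokens to the latent sequence.
    memory = np.concatenate([latents, text_emb], axis=0)   # (num_latent + T, dim)
    kv = memory @ w_kv                                     # shared key/value projection
    # Single-head scaled dot-product cross-attention from patch queries.
    scores = patch_queries @ kv.T / np.sqrt(dim)           # (num_patches, num_latent + T)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    feats = attn @ kv                                      # (num_patches, dim)
    return feats @ w_out                                   # (num_patches, patch_dim)

# Example usage with random weights; text_emb stands in for the output of
# a frozen text encoder (an assumption for this sketch).
latents = rng.standard_normal((num_latent, dim))
text_emb = rng.standard_normal((8, dim))
patches = text_aware_detokenize(
    latents, text_emb,
    rng.standard_normal((num_patches, dim)),  # learned patch queries
    rng.standard_normal((dim, dim)),          # key/value projection
    rng.standard_normal((dim, patch_dim)),    # output projection to pixels
)
assert patches.shape == (num_patches, patch_dim)
```

Because the text tokens sit in the decoder's attention memory, reconstructions can align with the caption without changing the tokenizer's encoder, which is consistent with the abstract's claim that text is integrated only at the de-tokenization stage.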
Community
We introduce TA-TiTok, a novel text-aware, transformer-based 1D tokenizer capable of processing both discrete and continuous tokens while ensuring accurate alignment between reconstructions and textual descriptions. Building upon TA-TiTok, we present MaskGen, a family of text-to-image masked generative models trained exclusively on open data. MaskGen achieves performance on par with models trained on proprietary datasets, while significantly reducing training costs and delivering substantially faster inference speeds.
Project page: https://tacju.github.io/projects/maskgen.html
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer (2024)
- Factorized Visual Tokenization and Generation (2024)
- MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost (2024)
- Language-Guided Image Tokenization for Generation (2024)
- CAT: Content-Adaptive Image Tokenization (2025)
- Liquid: Language Models are Scalable Multi-modal Generators (2024)
- Hierarchical Vision-Language Alignment for Text-to-Image Generation via Diffusion Models (2025)