arxiv:2512.14008

Sparse-LaViDa: Sparse Multimodal Discrete Diffusion Language Models

Published on Dec 16 · Submitted by Shufan Li on Dec 17
Abstract

Sparse-LaViDa accelerates Masked Discrete Diffusion Models by dynamically truncating masked tokens during inference, maintaining quality and achieving up to a 2x speedup across various tasks.

AI-generated summary

Masked Discrete Diffusion Models (MDMs) have achieved strong performance across a wide range of multimodal tasks, including image understanding, generation, and editing. However, their inference speed remains suboptimal due to the need to repeatedly process redundant masked tokens at every sampling step. In this work, we propose Sparse-LaViDa, a novel modeling framework that dynamically truncates unnecessary masked tokens at each inference step to accelerate MDM sampling. To preserve generation quality, we introduce specialized register tokens that serve as compact representations for the truncated tokens. Furthermore, to ensure consistency between training and inference, we design a specialized attention mask that faithfully matches the truncated sampling procedure during training. Built upon the state-of-the-art unified MDM LaViDa-O, Sparse-LaViDa achieves up to a 2x speedup across diverse tasks including text-to-image generation, image editing, and mathematical reasoning, while maintaining generation quality.
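The core idea in the abstract, truncating redundant masked tokens at each sampling step and standing in a compact register token for the removed positions, can be sketched as follows. This is a minimal illustrative sketch only; the paper's actual implementation is not shown on this page, so the token ids, the `truncate_masked` function, and the single-register simplification are all assumptions made for illustration.

```python
# Hypothetical sketch of masked-token truncation with a register token,
# as described in the Sparse-LaViDa abstract. All names and details here
# are illustrative assumptions, not the paper's actual code.

MASK = -1        # placeholder id for a masked token
REGISTER = -2    # placeholder id for a compact "register" token

def truncate_masked(tokens, keep_masked):
    """Keep only the first `keep_masked` masked positions; summarize the
    truncated remainder with a single register token so downstream
    attention still has a stand-in for the dropped positions."""
    kept, dropped = [], 0
    for t in tokens:
        if t == MASK:
            if keep_masked > 0:
                kept.append(t)
                keep_masked -= 1
            else:
                dropped += 1
        else:
            kept.append(t)
    if dropped:
        kept.append(REGISTER)  # compact stand-in for truncated masks
    return kept

seq = [5, MASK, MASK, 7, MASK, MASK]
print(truncate_masked(seq, 2))  # → [5, -1, -1, 7, -2]
```

Processing the shorter sequence at each step is what yields the speedup: the model attends over fewer positions while the register token preserves a summary of what was removed.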

Community

Paper submitter: Efficient Training and Inference for unified multi-modal diffusion language models

