Model Card for VAR (Visual AutoRegressive) Transformers
VAR is a new visual generation framework in which GPT-style autoregressive models surpass diffusion models for the first time, and which exhibits clear power-law scaling laws like large language models (LLMs).
VAR redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" (also called "next-resolution prediction"), in contrast to the standard raster-scan "next-token prediction".
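To illustrate the difference, here is a minimal sketch of the coarse-to-fine generation loop: rather than emitting one token at a time in raster order, each step produces an entire token map at the next, higher resolution, conditioned on all coarser maps so far. The scale schedule, vocabulary size, and the stubbed-out "model" below are illustrative assumptions, not the actual VAR implementation.

```python
import numpy as np

def next_scale_prediction(scales=(1, 2, 3, 4), vocab_size=16, seed=0):
    """Sketch of next-scale prediction: one full token map per step."""
    rng = np.random.default_rng(seed)
    generated = []  # token maps accumulated from coarse to fine
    for s in scales:
        # A real model would condition on `generated` (the coarser maps);
        # here we stub the prediction with random tokens for illustration.
        token_map = rng.integers(0, vocab_size, size=(s, s))
        generated.append(token_map)
    return generated

maps = next_scale_prediction()
print([m.shape for m in maps])  # each step emits s*s tokens at once
```

Note that at scale s the model emits s×s tokens in a single step, whereas raster-scan next-token prediction would need s×s separate steps for the same map.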
This repo hosts VAR's checkpoints.
For more details and tutorials, see https://github.com/FoundationVision/VAR.