---
license: apache-2.0
library_name: transformers
---

# Emu3: Next-Token Prediction is All You Need

[Emu3 Team, BAAI](https://www.baai.ac.cn/english.html) | [Project Page](https://emu.baai.ac.cn) | [Paper](https://huggingface.co/papers/2409.18869) | [🤗HF Models](https://huggingface.co/collections/BAAI/emu3-66f4e64f70850ff358a2e60f) | [github](https://github.com/baaivision/Emu3) | [Demo](https://huggingface.co/spaces/BAAI/Emu3) |
*Figure: Emu3 architecture.*
We introduce **Emu3**, a new suite of state-of-the-art multimodal models trained solely with **next-token prediction**! By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences.

### Emu3 excels in both generation and perception

**Emu3** outperforms several well-established task-specific models on both generation and perception tasks, surpassing flagship open models such as SDXL, LLaVA-1.6, and OpenSora-1.2, while eliminating the need for diffusion or compositional architectures.
*Figure: Comparison with task-specific flagship models on generation and perception benchmarks.*
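The discrete visual vocabulary comes from the released Emu3 vision tokenizer. Below is a minimal sketch of round-tripping an image through it to see the token sequence the transformer is trained on; the `encode`/`decode` method names and preprocessing conventions are assumptions based on the tokenizer's own model card, not part of this card's quickstart.

```python
from PIL import Image
import torch
from transformers import AutoModel, AutoImageProcessor

VQ_HUB = "BAAI/Emu3-VisionTokenizer"

# Load the vision tokenizer and its preprocessor (both ship remote code).
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()

# Preprocess an image into pixel values on the GPU.
image = Image.open("assets/demo.png")
pixel_values = image_processor(image, return_tensors="pt")["pixel_values"].to("cuda:0")

with torch.no_grad():
    # Encode pixels into a grid of discrete vision token ids (the "words" the
    # Emu3 transformer predicts), then decode them back to pixels to inspect
    # reconstruction quality. Exact input shapes (e.g. an extra batch/temporal
    # dimension) follow the tokenizer's own model card and may differ.
    codes = image_tokenizer.encode(pixel_values)
    recon = image_tokenizer.decode(codes)

print(codes.shape)  # the discrete sequence the language model is trained on
```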
### Highlights

- **Emu3** is capable of generating high-quality images from text input, simply by predicting the next vision token. The model naturally supports flexible resolutions and styles.
- **Emu3** shows strong vision-language understanding capabilities: it sees the physical world and provides coherent text responses. Notably, this capability is achieved without depending on CLIP or a pretrained LLM.
- **Emu3** generates videos causally by predicting the next token in a video sequence, unlike video diffusion models such as Sora. Given a video as context, Emu3 can also naturally extend the video and predict what will happen next.

#### Quickstart

```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
from transformers.generation.configuration_utils import GenerationConfig
import torch
import sys

# Path to the downloaded BAAI/Emu3-Chat files, which contain processing_emu3.py
sys.path.append("PATH_TO_BAAI_Emu3-Chat_MODEL")
from processing_emu3 import Emu3Processor

# model paths
EMU_HUB = "BAAI/Emu3-Chat"
VQ_HUB = "BAAI/Emu3-VisionTokenizer"

# prepare model and processor
model = AutoModelForCausalLM.from_pretrained(
    EMU_HUB,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True, padding_side="left")
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()
processor = Emu3Processor(image_processor, image_tokenizer, tokenizer)

# prepare input (mode='U' selects the understanding task)
text = "Please describe the image"
image = Image.open("assets/demo.png")

inputs = processor(
    text=text,
    image=image,
    mode='U',
    return_tensors="pt",
    padding="longest",
)

# prepare generation hyperparameters
GENERATION_CONFIG = GenerationConfig(
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=1024,
)

# generate
outputs = model.generate(
    inputs.input_ids.to("cuda:0"),
    GENERATION_CONFIG,
    attention_mask=inputs.attention_mask.to("cuda:0"),
)

# keep only the newly generated tokens and decode them to text
outputs = outputs[:, inputs.input_ids.shape[-1]:]
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
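The Quickstart above covers the understanding path (`mode='U'`). The sketch below adapts the same interface to the text-to-image direction described in the highlights, assuming the companion BAAI/Emu3-Gen checkpoint; the `mode='G'`, `ratio`, and `image_area` processor arguments, the sampling settings, and `processor.decode` returning PIL images are assumptions mirroring the Emu3-Gen quickstart, not guarantees. That card also documents additional sampling constraints (e.g. classifier-free guidance) that this sketch omits for brevity.

```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
from transformers.generation.configuration_utils import GenerationConfig
import torch
import sys

# Hypothetical local path to the downloaded BAAI/Emu3-Gen files (processing_emu3.py).
sys.path.append("PATH_TO_BAAI_Emu3-Gen_MODEL")
from processing_emu3 import Emu3Processor

EMU_HUB = "BAAI/Emu3-Gen"            # companion text-to-image checkpoint
VQ_HUB = "BAAI/Emu3-VisionTokenizer"

# prepare model and processor, as in the understanding Quickstart
model = AutoModelForCausalLM.from_pretrained(
    EMU_HUB,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True, padding_side="left")
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()
processor = Emu3Processor(image_processor, image_tokenizer, tokenizer)

# text prompt only; mode='G', ratio, and image_area are assumed processor arguments
inputs = processor(
    text="a shiba inu wearing a red scarf",
    mode='G',
    ratio="1:1",
    image_area=model.config.image_area,
    return_tensors="pt",
)

# illustrative sampling settings; max_new_tokens must cover a full image of vision tokens
GENERATION_CONFIG = GenerationConfig(
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=2048,
    max_new_tokens=40960,
)

outputs = model.generate(
    inputs.input_ids.to("cuda:0"),
    GENERATION_CONFIG,
    attention_mask=inputs.attention_mask.to("cuda:0"),
)

# processor.decode is assumed to map generated vision tokens back to PIL images
for idx, item in enumerate(processor.decode(outputs[0])):
    if isinstance(item, Image.Image):
        item.save(f"generated_{idx}.png")
```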