---
license: apache-2.0
language:
  - en
pipeline_tag: visual-question-answering
---

# :smiling_imp: IMP

The :smiling_imp: IMP project aims to provide a family of strong multimodal small language models (MSLMs). Our IMP-v0-3B model is a strong MSLM with only 3B parameters, built upon a small yet powerful SLM, Phi-2 (2.7B), and a powerful visual encoder, SigLIP (0.4B), and trained on the LLaVA-v1.5 training set.

As shown in the table below, IMP-v0-3B significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.

We release our model weights and provide an example below to run our model. A detailed technical report and the corresponding training/evaluation code will be released soon on our GitHub repo. We will keep improving our model and release future versions to further improve performance :)

## How to use

You can use the following code for model inference. We keep the required dependencies to a minimum: only the transformers and torch packages are needed. The format of the text instruction is similar to LLaVA.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Create model
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/imp-v0-3b",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/imp-v0-3b", trust_remote_code=True)

# Set inputs
text = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhat are the colors of the bus in the image? ASSISTANT:"
image = Image.open("images/bus.jpg")

input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)

# Generate the answer
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,
    use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
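
For repeated queries, the snippet above can be wrapped in a small helper that builds the LLaVA-style prompt from a question and an image path. The following is a minimal sketch, not part of the released code: it assumes the `model`, `tokenizer`, and `Image` import from the snippet above are already loaded, and it simply reuses the same prompt template, `image_preprocess` call, and `images` argument to `generate`.

```python
def ask(model, tokenizer, image_path, question, max_new_tokens=100):
    """Answer a single question about a single image (minimal sketch)."""
    # LLaVA-style prompt with the <image> placeholder, as in the example above.
    prompt = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        f"USER: <image>\n{question} ASSISTANT:"
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    image_tensor = model.image_preprocess(Image.open(image_path))

    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        images=image_tensor,
        use_cache=True)[0]
    # Strip the prompt tokens and decode only the newly generated answer.
    return tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()

print(ask(model, tokenizer, "images/bus.jpg", "What are the colors of the bus in the image?"))
```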

## Model evaluation

We perform evaluation on 9 commonly-used benchmarks to validate the effectiveness of our model, including 5 academic VQA benchmarks and 4 recent MLLM benchmarks.

| Models | Size | VQAv2 | GQA | VizWiz | SQA (IMG) | TextVQA | POPE | MME | MMB | MM-Vet |
|---|---|---|---|---|---|---|---|---|---|---|
| LLaVA-v1.5-lora | 7B | 79.10 | 63.00 | 47.80 | 68.40 | 58.20 | 86.40 | 1476.9 | 66.10 | 30.2 |
| TinyGPT-V | 3B | - | 33.60 | 24.80 | - | - | - | - | - | - |
| LLaVA-Phi | 3B | 71.40 | - | 35.90 | 68.40 | 48.60 | 85.00 | 1335.1 | 59.80 | 28.9 |
| MobileVLM | 3B | - | 59.00 | - | 61.00 | 47.50 | 84.90 | 1288.9 | 59.60 | - |
| MC-LLaVA-3b | 3B | 64.24 | 49.6 | 24.88 | - | 38.59 | 80.59 | - | - | - |
| IMP-v0 (ours) | 3B | 79.45 | 58.55 | 50.09 | 69.96 | 59.38 | 88.02 | 1434 | 66.49 | 33.1 |

## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

## About us

Project :smiling_imp: IMP is maintained by the MILVLG group led by Prof. Zhou Yu and Jun Yu, and mainly developed by Zhenwei Shao and Xuecheng Ouyang. We hope our model may serve as a strong baseline to inspire future research on MSLMs and their derivative applications on mobile devices and robotics.