|
--- |
|
inference: false |
|
license: apache-2.0 |
|
--- |
|
|
|
# Model Card |
|
|
|
<p align="center"> |
|
<img src="./icon.png" alt="Logo" width="350"> |
|
</p> |
|
|
|
[Technical report](https://arxiv.org/abs/2402.11530) | [Code](https://github.com/BAAI-DCAI/Bunny) | [Demo](http://bunny.dataoptim.org/)
|
|
|
This is Bunny-v1.1-4B. |
|
|
|
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-3-mini, Llama-3-8B, Phi-1.5, StableLM-2, and Phi-2. To compensate for the smaller model size, we construct more informative training data through curated selection from a broader set of data sources.
|
|
|
We provide Bunny-v1.1-4B, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) with [S \\(^{2}\\)-Wrapper](https://github.com/bfshi/scaling_on_scales), supporting 1152x1152 resolution. More details about this model can be found on [GitHub](https://github.com/BAAI-DCAI/Bunny); a brief sketch of the S \\(^{2}\\) multi-scale idea follows the table below.
|
|
|
| | MME \\(^{\text{P}}\\) | MME \\(^{\text{C}}\\) | MMB \\(^{\text{T/D}}\\) | MMB-CN \\(^{\text{T/D}}\\) | SEED(-IMG) | MMMU \\(^{\text{V/T}}\\) | VQA \\(^{\text{v2}}\\) | GQA | SQA \\(^{\text{I}}\\) | POPE |
| ------------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Bunny-v1.1-4B | 1503.9 | 362.9 | 74.1/74.1 | 66.3/64.8 | 64.6 (71.7) | 40.2/38.8 | 81.7 | 63.4 | 76.3 | 87.0 |

Superscripts mark evaluation splits: perception (P) and cognition (C) for MME, test/dev (T/D) for MMBench, validation/test (V/T) for MMMU, and the image subset (I) for ScienceQA.
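To give an intuition for what the S \\(^{2}\\)-Wrapper does, here is a minimal sketch (our illustration, not Bunny's actual implementation): the image is interpolated to several multiples of the encoder's native resolution, each scale is split into native-resolution tiles that pass through the same encoder, and the per-scale feature maps are pooled back to a common size and concatenated channel-wise. The `encoder` interface returning spatial feature maps and the default scales are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def s2_features(encoder, image, base=384, scales=(1, 2, 3)):
    """Sketch of the S2 multi-scale idea. Assumes `encoder` maps
    (N, 3, base, base) images to (N, C, h, w) feature maps; this is an
    illustrative interface, not Bunny's actual code."""
    per_scale = []
    for s in scales:
        # interpolate the image to s times the encoder's native resolution
        x = F.interpolate(image, size=(base * s, base * s),
                          mode='bilinear', align_corners=False)
        # split into s*s tiles at the native resolution
        tiles = x.unfold(2, base, base).unfold(3, base, base)  # (B, 3, s, s, base, base)
        tiles = tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, base, base)
        feats = encoder(tiles)                                 # (B*s*s, C, h, w)
        # stitch the tile features back into one spatial map per image
        B = image.shape[0]
        C, h, w = feats.shape[1:]
        feats = feats.reshape(B, s, s, C, h, w).permute(0, 3, 1, 4, 2, 5)
        feats = feats.reshape(B, C, s * h, s * w)
        # pool every scale back to the base feature-map size
        per_scale.append(F.adaptive_avg_pool2d(feats, (h, w)))
    # concatenate scales channel-wise: (B, C * len(scales), h, w)
    return torch.cat(per_scale, dim=1)
```

With `base=384` and scales up to 3, the largest input processed is 1152x1152, matching the resolution quoted above.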
|
|
|
|
|
|
|
|
|
# Quickstart |
|
|
|
Here is a code snippet showing how to use the model with `transformers`.
|
|
|
Before running the snippet, you need to install the following dependencies: |
|
|
|
```shell |
|
pip install torch transformers accelerate pillow |
|
``` |
|
If GPU memory is sufficient, it is faster to run this snippet with `CUDA_VISIBLE_DEVICES=0`, which restricts execution to a single GPU.
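For example, assuming the snippet is saved as `quickstart.py` (a placeholder filename):

```shell
CUDA_VISIBLE_DEVICES=0 python quickstart.py
```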
|
|
|
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
device = 'cuda'  # or 'cpu'
torch.set_default_device(device)

# create model
model = AutoModelForCausalLM.from_pretrained(
    'BAAI/Bunny-v1_1-4B',
    torch_dtype=torch.float16,  # use torch.float32 for CPU
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    'BAAI/Bunny-v1_1-4B',
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"

# tokenize the prompt around the <image> tag and insert -200, the
# placeholder token id that the model replaces with image features
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][1:], dtype=torch.long).unsqueeze(0).to(device)

# image; sample images can be found in the images folder of the repository
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    repetition_penalty=1.0  # increase this to avoid repetition
)[0]

# decode only the newly generated tokens
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
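To run on CPU instead, following the comments in the snippet above, adjust the device and load the weights in `torch.float32`. A minimal sketch of only the changed lines:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 'cpu'
torch.set_default_device(device)

# load in float32 on CPU; device_map is omitted since nothing is sharded across GPUs
model = AutoModelForCausalLM.from_pretrained(
    'BAAI/Bunny-v1_1-4B',
    torch_dtype=torch.float32,
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    'BAAI/Bunny-v1_1-4B',
    trust_remote_code=True)
```

The rest of the snippet (prompt construction, image processing, and generation) is unchanged.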
|
|
|
|