|
--- |
|
license: apache-2.0 |
|
library_name: transformers |
|
pipeline_tag: object-detection |
|
base_model: THUDM/cogvlm-grounding-generalist-hf |
|
quantized_by: Rodeszones |
|
--- |
|
|
|
# CogVLM |
|
|
|
CogVLM grounding-generalist model quantized to 4-bit precision with bitsandbytes.
|
|
|
**CogVLM** is a powerful **open-source visual language model** (**VLM**). CogVLM-17B has 10 billion vision parameters and 7 billion language parameters. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., **surpassing or matching PaLI-X 55B**.
|
|
|
<div align="center"> |
|
<img src="https://github.com/THUDM/CogVLM/raw/main/assets/metrics-min.png" alt="img" style="zoom: 50%;" /> |
|
</div> |
|
|
|
# My environment (pip packages)
|
|
|
```bash
|
pip install torch==2.2.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install transformers==4.38.1 accelerate==0.27.2 sentencepiece==0.1.99 einops==0.7.0 xformers==0.0.24 protobuf==3.20.3 triton==2.1.0 bitsandbytes==0.43.0.dev0
|
``` |
|
For triton and bitsandbytes on Windows, use these wheel files:
|
|
|
```bash
|
pip install bitsandbytes-0.43.0.dev0-cp310-cp310-win_amd64.whl |
|
|
|
pip install triton-2.1.0-cp310-cp310-win_amd64.whl |
|
``` |
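
After installing, a quick sanity check like the one below can confirm that the CUDA build of torch and the quantization dependencies import cleanly. This snippet is not part of the original setup and only assumes the packages listed above:

```python
# Sanity check (optional): verify that torch sees the GPU and that the
# quantization dependencies import and report their versions.
import torch
import transformers
import bitsandbytes
import triton

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("triton:", triton.__version__)
```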
|
|
|
# Quickstart |
|
|
|
```python |
|
import torch |
|
from PIL import Image |
|
from transformers import AutoModelForCausalLM, LlamaTokenizer |
|
|
|
model_path = "Rodeszones/CogVLM-grounding-generalist-hf-quant4"  # or a local model folder path
|
|
|
|
|
tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5') |
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_path, |
|
torch_dtype=torch.bfloat16, |
|
low_cpu_mem_usage=True, |
|
trust_remote_code=True |
|
).eval() |
|
|
|
|
|
# chat example |
|
query = 'Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?' |
|
image = Image.open("your/image/path/here").convert('RGB') |
|
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image]) # chat mode |
|
inputs = { |
|
'input_ids': inputs['input_ids'].unsqueeze(0).to('cuda'), |
|
'token_type_ids': inputs['token_type_ids'].unsqueeze(0).to('cuda'), |
|
'attention_mask': inputs['attention_mask'].unsqueeze(0).to('cuda'), |
|
'images': [[inputs['images'][0].to('cuda').to(torch.bfloat16)]], |
|
} |
|
gen_kwargs = {"max_length": 2048, "do_sample": False} |
|
|
|
with torch.no_grad(): |
|
outputs = model.generate(**inputs, **gen_kwargs) |
|
outputs = outputs[:, inputs['input_ids'].shape[1]:] |
|
print(tokenizer.decode(outputs[0])) |
|
|
|
# example output |
|
# a room with a ladder [[378,107,636,998]] and a blue and white towel [[073,000,346,905]].</s> |
|
# NOTE: bounding-box coordinates are expressed on a 1000x1000 grid, so rescale them to the actual image dimensions before use.
|
|
|
``` |
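
As noted above, the boxes in the output are given on the model's 1000x1000 coordinate grid, so they need to be rescaled to the pixel size of the input image before drawing or cropping. The sketch below shows one way to do that; the `rescale_boxes` helper and its regex are illustrative (they assume one box per `[[...]]` pair, as in the sample output) and reuse the `image`, `tokenizer`, and `outputs` objects from the Quickstart.

```python
import re

def rescale_boxes(text, image_size, grid=1000):
    """Parse [[x0,y0,x1,y1]] boxes from the decoded output and rescale them
    from the model's coordinate grid (1000x1000 by default) to pixels."""
    width, height = image_size
    boxes = []
    for x0, y0, x1, y1 in re.findall(r"\[\[(\d+),(\d+),(\d+),(\d+)\]\]", text):
        boxes.append((
            int(x0) * width / grid,
            int(y0) * height / grid,
            int(x1) * width / grid,
            int(y1) * height / grid,
        ))
    return boxes

# `image` is the PIL image opened in the Quickstart; image.size is (width, height)
print(rescale_boxes(tokenizer.decode(outputs[0]), image.size))
```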
|
|
|
# License
|
|
|
The code in this repository is open source under the [Apache-2.0 license](https://github.com/THUDM/CogVLM/raw/main/LICENSE), while the use of the CogVLM model weights must comply with the [Model License](https://github.com/THUDM/CogVLM/raw/main/MODEL_LICENSE). |
|
|
|
|
|
|
|
# Citation
|
``` |
|
@article{wang2023cogvlm, |
|
title={CogVLM: Visual Expert for Pretrained Language Models}, |
|
author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang}, |
|
year={2023}, |
|
eprint={2311.03079}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV} |
|
} |
|
``` |