|
--- |
|
language: |
|
- zh |
|
- en |
|
--- |
|
# VisCPM |
|
简体中文 | [English](README_en.md) |
|
|
|
<p align="center"> |
|
<p align="left"> |
|
<a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a> |
|
<a href=""><img src="https://img.shields.io/badge/python-3.8+-aff.svg"></a> |
|
</p> |
|
|
|
`VisCPM` is a family of open-source large multimodal models that support multimodal conversation (`VisCPM-Chat`) and text-to-image generation (`VisCPM-Paint`) in both Chinese and English, achieving state-of-the-art performance among Chinese open-source multimodal models. `VisCPM` is trained on top of the 10B-parameter large language model [CPM-Bee](https://github.com/OpenBMB/CPM-Bee), fusing a visual encoder (Q-Former) and a visual decoder (Diffusion-UNet) to support visual inputs and outputs. Thanks to CPM-Bee's strong bilingual capability, `VisCPM` can be pre-trained on English multimodal data alone and still generalize to achieve promising Chinese multimodal capabilities.
|
|
|
|
|
|
## VisCPM-Chat |
|
`VisCPM-Chat` supports bilingual multimodal conversation about images in Chinese and English. The model uses `Q-Former` as the visual encoder and CPM-Bee (10B) as the underlying language model, and fuses the vision and language modules through the language-modeling training objective. Training consists of two stages, pre-training and instruction fine-tuning:
|
|
|
* Pre-training: We pre-train `VisCPM-Chat` on roughly 100M high-quality English image-text pairs, drawn from CC3M, CC12M, COCO, Visual Genome, LAION, and other datasets. In this stage the language model parameters stay frozen and only the parameters of the `Q-Former` part are updated, enabling efficient alignment of vision and language representations at scale.
|
|
|
* Instruction fine-tuning: We fine-tune the model on the English [LLaVA-150K](https://llava-vl.github.io/) instruction data, mixed with its Chinese translation, to align the model's multimodal capabilities with user intent. In this stage all model parameters are updated, to make better use of the instruction data. Interestingly, we found that even when fine-tuned on English instruction data alone, the model can understand questions asked in Chinese but will only answer in English; this shows that its multilingual multimodal capabilities already generalize well. Adding a small amount of translated Chinese data in this stage aligns the language of the model's replies with the language of the user's questions. A minimal sketch of this two-stage freezing scheme follows this list.
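
To make the two-stage recipe concrete, here is a minimal PyTorch sketch of the freezing scheme described above. The modules, sizes, and learning rates are illustrative stand-ins (the real model pairs CPM-Bee 10B with a Q-Former), not the actual VisCPM training code; only the `requires_grad` pattern mirrors the recipe.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the real components (CPM-Bee 10B + Q-Former).
language_model = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=2)
q_former = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=512, nhead=8), num_layers=2)
query_tokens = nn.Parameter(torch.randn(1, 32, 512))  # learnable visual query embeddings

# Stage 1 (pre-training): freeze the language model, train only the Q-Former part.
for p in language_model.parameters():
    p.requires_grad = False
stage1_params = list(q_former.parameters()) + [query_tokens]
optimizer = torch.optim.AdamW(stage1_params, lr=1e-4)  # placeholder learning rate

# Stage 2 (instruction fine-tuning): unfreeze and update all model parameters.
for p in language_model.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW(list(language_model.parameters()) + stage1_params, lr=1e-5)
```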
|
|
|
We evaluated the model on the English LLaVA test set and its Chinese translation. The benchmark covers open-domain conversation, detailed image description, and complex reasoning, with GPT-4 used for scoring. As the table below shows, `VisCPM-Chat` achieves the best average Chinese multimodal performance, excelling at open-domain conversation and complex reasoning, while also showing solid English multimodal capabilities.
|
|
|
<table> |
|
<tr> |
|
<td align="center" rowspan="2" colspan="2">模型</td> |
|
<td align="center" colspan="4">英文</td> |
|
<td align="center" colspan="4">中文</td> |
|
</tr> |
|
<tr> |
|
<td align="center">多模态对话</td> |
|
<td align="center">细节描述</td> |
|
<td align="center">复杂推理</td> |
|
<td align="center">平均</td> |
|
<td align="center">多模态对话</td> |
|
<td align="center">细节描述</td> |
|
<td align="center">复杂推理</td> |
|
<td align="center">平均</td> |
|
</tr> |
|
<tr> |
|
<td align="center" rowspan="3">英文模型</td> |
|
<td align="center">MiniGPT4</td> |
|
<td align="center">65</td> |
|
<td align="center">67.3</td> |
|
<td align="center">76.6</td> |
|
<td align="center">69.7</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
</tr> |
|
<tr> |
|
<td align="center">InstructBLIP</td> |
|
<td align="center">81.9</td> |
|
<td align="center">68</td> |
|
<td align="center">91.2</td> |
|
<td align="center">80.5</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
</tr> |
|
<tr> |
|
<td align="center">LLaVA</td> |
|
<td align="center">89.5</td> |
|
<td align="center">70.4</td> |
|
<td align="center">96.2</td> |
|
<td align="center">85.6</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
</tr> |
|
<tr> |
|
<td align="center" rowspan="4">中英双语</td> |
|
<td align="center">mPLUG-Owl </td> |
|
<td align="center">64.6</td> |
|
<td align="center">47.7</td> |
|
<td align="center">80.1</td> |
|
<td align="center">64.2</td> |
|
<td align="center">76.3</td> |
|
<td align="center">61.2</td> |
|
<td align="center">77.8</td> |
|
<td align="center">72</td> |
|
</tr> |
|
<tr> |
|
<td align="center">VisualGLM</td> |
|
<td align="center">62.4</td> |
|
<td align="center">63</td> |
|
<td align="center">80.6</td> |
|
<td align="center">68.7</td> |
|
<td align="center">76.6</td> |
|
<td align="center">87.8</td> |
|
<td align="center">83.6</td> |
|
<td align="center">82.7</td> |
|
</tr> |
|
<tr> |
|
<td align="center">Ziya (LLaMA 13B)</td> |
|
<td align="center">82.7</td> |
|
<td align="center">69.9</td> |
|
<td align="center">92.1</td> |
|
<td align="center">81.7</td> |
|
<td align="center">85</td> |
|
<td align="center">74.7</td> |
|
<td align="center">82.4</td> |
|
<td align="center">80.8</td> |
|
</tr> |
|
<tr> |
|
<td align="center">VisCPM-Chat</td> |
|
<td align="center">83.3</td> |
|
<td align="center">68.9</td> |
|
<td align="center">90.5</td> |
|
<td align="center">81.1</td> |
|
<td align="center">92.7</td> |
|
<td align="center">76.1</td> |
|
<td align="center">89.2</td> |
|
<td align="center">86.3</td> |
|
</tr> |
|
</table> |
|
|
|
## VisCPM-Paint |
|
`VisCPM-Paint` supports bilingual text-to-image generation in Chinese and English. The model uses CPM-Bee (10B) as the text encoder and a `UNet` as the image decoder, fused through the diffusion-model training objective. The language model parameters stay frozen throughout training. The visual decoder is initialized with the UNet weights of [Stable Diffusion 2.1](https://github.com/Stability-AI/stablediffusion) and is fused with the language model by progressively unfreezing key bridging parameters: first the linear layer that maps text representations into the visual model is trained, then the cross-attention layers of the `UNet` are additionally unfrozen. The model is trained on the English image-text pairs of [LAION 2B](https://laion.ai/).
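
As an illustration, the progressive unfreezing can be sketched with `diffusers`, in whose `UNet2DConditionModel` the `attn2` modules are the cross-attention layers. The linear bridge and the text-encoder hidden size below are assumptions made for the sketch, not the actual VisCPM-Paint code.

```python
import torch.nn as nn
from diffusers import UNet2DConditionModel

# Initialize the visual decoder from Stable Diffusion 2.1's UNet weights.
unet = UNet2DConditionModel.from_pretrained('stabilityai/stable-diffusion-2-1', subfolder='unet')

# Hypothetical linear bridge mapping frozen text-encoder states (4096 is an
# assumed hidden size) into the UNet's cross-attention dimension.
text_proj = nn.Linear(4096, unet.config.cross_attention_dim)

# Step 1: freeze the whole UNet so only the linear bridge is trained at first.
unet.requires_grad_(False)

# Step 2: additionally unfreeze the cross-attention layers ('attn2' in diffusers).
for name, module in unet.named_modules():
    if name.endswith('attn2'):
        for p in module.parameters():
            p.requires_grad = True
```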
|
|
|
Similar to `VisCPM-Chat`, we found that thanks to CPM-Bee's bilingual capability, `VisCPM-Paint` can be trained on English image-text pairs alone and still generalize to good Chinese text-to-image generation, reaching the best results among Chinese open-source models. Adding a further 20M cleaned native Chinese image-text pairs and 120M image-text pairs translated into Chinese improves its Chinese generation quality even more. We sampled 30,000 images on MSCOCO and computed FID (Fréchet Inception Distance), which measures the quality of the generated images, and CLIP Score, which measures how well the generated images match the input text.
|
|
|
<table> |
|
<tr> |
|
<td align="center" rowspan="2">模型</td> |
|
<td align="center" colspan="2">英文</td> |
|
<td align="center" colspan="2">中文</td> |
|
</tr> |
|
<tr> |
|
<td align="center">FID↓</td> |
|
<td align="center">CLIP Score↑</td> |
|
<td align="center">FID↓</td> |
|
<td align="center">CLIP Score↑</td> |
|
</tr> |
|
<tr> |
|
<td align="center">AltDiffusion</td> |
|
<td align="center">17.16</td> |
|
<td align="center">25.24</td> |
|
<td align="center">16.09</td> |
|
<td align="center">24.05</td> |
|
</tr> |
|
<tr> |
|
<td align="center">TaiyiDiffusion</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
<td align="center">15.58</td> |
|
<td align="center">22.69</td> |
|
</tr> |
|
<tr> |
|
<td align="center">Stable Diffusion</td> |
|
<td align="center">9.08</td> |
|
<td align="center">26.22</td> |
|
<td align="center">-</td> |
|
<td align="center">-</td> |
|
</tr> |
|
<tr> |
|
<td align="center">VisCPM-Paint-en</td> |
|
<td align="center">9.51</td> |
|
<td align="center">25.35</td> |
|
<td align="center">10.86</td> |
|
<td align="center">23.38</td> |
|
</tr> |
|
<tr> |
|
<td align="center">VisCPM-Paint-zh</td> |
|
<td align="center">9.98</td> |
|
<td align="center">25.04</td> |
|
<td align="center">9.65</td> |
|
<td align="center">24.17</td> |
|
</tr> |
|
</table> |
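
For reference, both metrics can be computed with `torchmetrics`. The snippet below sketches the protocol with random placeholder tensors and is not the evaluation script behind the table above.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

fid = FrechetInceptionDistance(feature=2048)                          # lower is better
clip = CLIPScore(model_name_or_path='openai/clip-vit-base-patch16')   # higher is better

# Placeholders: uint8 image batches of shape (N, 3, H, W) plus their captions.
real_images = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
prompts = ['a photo of a hot air balloon'] * 16

fid.update(real_images, real=True)    # reference images, e.g. sampled from MSCOCO
fid.update(fake_images, real=False)   # generated images for the same captions
print('FID:', fid.compute().item())
print('CLIP Score:', clip(fake_images, prompts).item())
```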
|
|
|
## Installation
|
|
|
```Shell
conda create -n viscpm python=3.10 -y
conda activate viscpm
pip install setuptools
pip install diffusers jieba matplotlib numpy opencv_python
pip install pandas Pillow psutil pydantic scipy
pip install torch==1.13.1 torchscale==0.2.0 torchvision==0.14.1 timm
pip install transformers==4.28.0
pip install tqdm typing_extensions
pip install git+https://github.com/thunlp/OpenDelta.git
pip install "git+https://github.com/OpenBMB/CPM-Bee.git#egg=cpm-live&subdirectory=src"
```
|
|
|
VisCPM currently requires a single GPU with at least 40GB of memory. We will release a more memory-efficient inference method as soon as possible.
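
Until then, loading the weights in half precision may roughly halve the memory footprint. This is an untested sketch relying on the generic `torch_dtype` argument of `transformers`, not an officially supported path:

```python
>>> import torch
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True,
...                                   torch_dtype=torch.half).to('cuda')
```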
|
|
|
## Usage
|
|
|
```python
>>> from transformers import AutoModel, AutoTokenizer, AutoImageProcessor
>>> from PIL import Image

>>> tokenizer = AutoTokenizer.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> processor = AutoImageProcessor.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True)
>>> model = AutoModel.from_pretrained('openbmb/VisCPM-Chat', trust_remote_code=True).to('cuda')

>>> # the 'image' field reserves query_num unk-token placeholders, which the
>>> # model fills with the Q-Former's visual features
>>> data = [{
...     'context': '',
...     'question': 'describe this image in detail.',
...     'image': tokenizer.unk_token * model.query_num,
...     '<ans>': ''
... }]
>>> image = Image.open('case.jpg')
>>> result = model.generate(data, tokenizer, processor, image)
>>> print(result[0]['<ans>'])
这幅图片显示了一群热气球在天空中飞行。这些热气球漂浮在不同的地方,包括山脉、城市和乡村地区。
```

(Translation of the sample output: "This picture shows a group of hot air balloons flying in the sky, floating over different places including mountains, cities, and rural areas.")