JustinLin610 committed on
Commit • e0175b5
Parent(s): 92b425b
Upload README.md
README.md CHANGED
@@ -18,61 +18,123 @@ inference: false
<br>

<p align="center">
    Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a> | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>
</p>
<br>

**Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型的特点包括:

- **强大的性能**:在四大类多模态任务的标准英文测评中(Zero-shot Caption/VQA/DocVQA/Grounding)上,均取得同等通用模型大小下最好效果;
- **多语言对话模型**:天然支持多语言对话,端到端支持图片里中英双语的长文本识别;
- **多图交错对话**:支持多图输入和比较,指定图片问答,多图文学创作等;
- **首个支持中文开放域定位的通用模型**:通过中文开放域语言表达进行检测框标注;
- **细粒度识别和理解**:相比于目前其它开源LVLM使用的224分辨率,Qwen-VL是首个开源的448分辨率的LVLM模型。更高分辨率可以提升细粒度的文字识别、文档问答和检测框标注。

**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, and outputs text and bounding box. The features of Qwen-VL include:

- **Strong performance**: It significantly surpasses existing open-source Large Vision Language Models (LVLM) of similar scale on multiple English evaluation benchmarks (including Zero-shot Caption, VQA, DocVQA, and Grounding).
- **Multilingual LVLM supporting text recognition**: Qwen-VL naturally supports multilingual conversation and end-to-end recognition of bilingual Chinese-English text in images.
- **Multi-image interleaved conversations**: This feature allows the input and comparison of multiple images, questions about specified images, and multi-image storytelling.
- **First generalist model supporting grounding in Chinese**: It detects bounding boxes through open-domain language expressions in both Chinese and English.
- **Fine-grained recognition and understanding**: Compared to the 224 resolution currently used by other open-source LVLMs, the 448 resolution promotes fine-grained text recognition, document QA, and bounding box annotation.

目前,我们提供了
- Qwen-VL-Chat

- Qwen-VL: The pre-trained LVLM model uses Qwen-7B as the initialization of the LLM and [Openclip ViT-bigG](https://github.com/mlfoundations/open_clip) as the initialization of the visual encoder, and connects them with a randomly initialized cross-attention layer. Qwen-VL was trained on about 1.5B image-text pairs.
- Qwen-VL-Chat: A multimodal LLM-based AI assistant, which is trained with alignment techniques.
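The architecture described above (a Qwen-7B LLM, an Openclip ViT-bigG visual encoder, and a randomly initialized cross-attention layer connecting the two) can be pictured with a minimal PyTorch sketch. This is illustrative only and not the actual Qwen-VL implementation; the class name, number of query tokens, and feature widths below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ToyVisionLanguageAdapter(nn.Module):
    """Illustrative sketch only: learned queries cross-attend over ViT patch
    features and produce visual tokens in the LLM's embedding width."""

    def __init__(self, vision_dim=1664, llm_dim=4096, num_queries=256, num_heads=8):
        super().__init__()
        # Learned query embeddings that are compressed into LLM-space visual tokens.
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim) * 0.02)
        # Project ViT features into the LLM embedding width.
        self.vision_proj = nn.Linear(vision_dim, llm_dim)
        # Randomly initialized cross-attention, as described in the README.
        self.cross_attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, vision_feats):            # (batch, patches, vision_dim)
        kv = self.vision_proj(vision_feats)     # (batch, patches, llm_dim)
        q = self.queries.unsqueeze(0).expand(vision_feats.size(0), -1, -1)
        visual_tokens, _ = self.cross_attn(q, kv, kv)
        return visual_tokens                    # fed to the LLM alongside text embeddings

# Toy usage: a 448x448 image with 14x14 patches gives 1024 patch features.
fake_vit_output = torch.randn(1, 1024, 1664)
adapter = ToyVisionLanguageAdapter()
print(adapter(fake_vit_output).shape)  # torch.Size([1, 256, 4096])
```

The point is only the data flow: frozen ViT patch features are projected, attended over by learned queries, and the resulting visual tokens are consumed by the LLM together with the text embeddings.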
## 评测 (Evaluation)

我们从两个角度评测了两个模型的能力:
1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
   - Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
   - General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
   - Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
   - Referring Expression Comprehension:评测模型给定物体描述画检测框的能力;
2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:

评测结果如下:

We evaluated the model's ability from two perspectives:
1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
   - Zero-shot Caption: Evaluate the model's zero-shot image captioning ability on unseen datasets;
   - General VQA: Evaluate the model's general question-answering ability on images, covering judgment, color, number, category, and similar questions;
   - Text-based VQA: Evaluate the model's ability to recognize text in images, such as document QA and chart QA;
   - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
   - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, and math problem solving;
   - In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
   - The benchmark includes both English and Chinese versions.
@@ -85,7 +147,8 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
<p>

### Zero-shot Captioning & General VQA

<table>
<thead>
<tr>
@@ -242,11 +305,10 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has

- 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。

- For zero-shot image captioning, Qwen-VL achieves **SOTA** results on Flickr30K and competitive results with InstructBLIP on NoCaps.
- For general VQA, Qwen-VL achieves **SOTA** results under the same generalist LVLM scale settings.

### Text-oriented VQA

<table>
<thead>
@@ -316,11 +378,11 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has

- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pix2Struct-Large 模型。

- In text-related recognition/QA evaluation, Qwen-VL achieves SOTA results under the generalist LVLM scale settings.
- Resolution matters for several of the above evaluations. Most open-source LVLMs with 224 resolution cannot handle these evaluations, or can only do so by cropping images into tiles, whereas Qwen-VL scales the resolution to 448 and can be evaluated end-to-end. On some tasks Qwen-VL even outperforms the 1024-resolution Pix2Struct-Large model.

### Referring Expression Comprehension

<table>
<thead>
<tr>
@@ -490,13 +552,13 @@ Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has

We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.

### Chat

TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。

TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, and math problem solving. Please read [touchstone/README.md](touchstone/README.md) for more information.

#### English

| Model         | Score |
|---------------|-------|
@@ -508,7 +570,7 @@ TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities o
| LLaVA         | 602.7 |
| Qwen-VL-Chat  | 645.2 |

#### Chinese

| Model         | Score |
|---------------|-------|
@@ -518,95 +580,39 @@ TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities o

Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。

Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluations.

## Requirements

* python 3.8及以上版本
* pytorch 1.12及以上版本,推荐2.0及以上版本
* 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)

## Quickstart

我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用 Qwen-VL 和 Qwen-VL-Chat。

在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。

Below, we provide simple examples to show how to use Qwen-VL and Qwen-VL-Chat with 🤗 Transformers.

Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.

```bash
pip install -r requirements.txt
```

```python
# ... (lines of this block that were unchanged by the commit, e.g. the transformers
# imports, tokenizer setup, query construction, and generate call, are omitted here)
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)

# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cuda", trust_remote_code=True).eval()

model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)

response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=False)
print(response)
# <img>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg</img>Generate the caption in English with grounding:<ref> Woman</ref><box>(451,379),(731,806)</box> and<ref> her dog</ref><box>(219,424),(576,896)</box> playing on the beach<|endoftext|>
image = tokenizer.draw_bbox_on_latest_picture(response)
if image:
    image.save('2.jpg')
else:
    print("no box")
```

<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_spotting_caption.jpg" width="500"/>
<p>

## FAQ

如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md) 以及 issue 区,如仍无法解决再提交 issue。

If you encounter problems, please check the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and existing issues for a solution before opening a new issue.

## License Agreement

研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。

Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use; check our license at [LICENSE](LICENSE) for more details. For commercial use, please fill out the [questionnaire](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.

## Contact Us

如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。

If you would like to leave a message for our research or product team, feel free to contact us by email at qianwen_opensource@alibabacloud.com.

<br>

<p align="center">
    Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a> | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> | <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a> | Qwen-VL-Chat-Int4 <a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
<br>
<a href="assets/wechat.png">WeChat</a> | <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a> | <a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a> | <a href="https://arxiv.org/abs/2308.12966">Report</a>
</p>
<br>

**Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。

**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, and outputs text and bounding box. The Qwen-VL series delivers strong performance, supports multilingual dialogue and interleaved multi-image dialogue, and provides open-domain grounding in Chinese as well as fine-grained image recognition and understanding.

目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL仓库。

We release Qwen-VL and Qwen-VL-Chat, which are the pre-trained model and the Chat model, respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL.
<br>

## 安装要求 (Requirements)

* python 3.8及以上版本
* pytorch 1.12及以上版本,推荐2.0及以上版本
* 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)
<br>

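To quickly confirm that a local environment meets these requirements, a small check along the following lines can help. This snippet is not part of the official repository; it is a minimal sketch that only inspects the interpreter, PyTorch, and CUDA versions.

```python
import sys

import torch

# Minimal sanity check against the requirements listed above.
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version.split()[0]}"
print("python :", sys.version.split()[0])
print("pytorch:", torch.__version__)      # 1.12+ required, 2.0+ recommended
print("cuda   :", torch.version.cuda)     # 11.4+ recommended for GPU users
print("gpu ok :", torch.cuda.is_available())
```
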
## 快速开始 (Quickstart)

我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用 Qwen-VL。

在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。

Below, we provide simple examples to show how to use Qwen-VL with 🤗 Transformers.

Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.

```bash
pip install -r requirements.txt
```

接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。

Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).

#### 🤗 Transformers

To use Qwen-VL for inference, all you need to do is to input a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)

# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL", device_map="cuda", trust_remote_code=True).eval()

# Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL", trust_remote_code=True)

query = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
    {'text': 'Generate the caption in English with grounding:'},
])
inputs = tokenizer(query, return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs)
response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=False)
print(response)
# <img>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg</img>Generate the caption in English with grounding:<ref> Woman</ref><box>(451,379),(731,806)</box> and<ref> her dog</ref><box>(219,424),(576,896)</box> playing on the beach<|endoftext|>
image = tokenizer.draw_bbox_on_latest_picture(response)
if image:
    image.save('2.jpg')
else:
    print("no box")
```

<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_spotting_caption.jpg" width="500"/>
<p>
<br>

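The grounded caption printed above marks each referred object with `<ref>...</ref>` followed by a `<box>(x1,y1),(x2,y2)</box>` span. `tokenizer.draw_bbox_on_latest_picture` already visualizes these, but if you want the raw coordinates, a small parser such as the sketch below can pull them out. This helper is not part of the released code, and it assumes that the coordinates are expressed on a 0-1000 normalized grid that must be rescaled to the actual image size; the image dimensions used in the example call are placeholders.

```python
import re

def parse_ref_boxes(response: str, width: int, height: int):
    """Extract (label, (x1, y1, x2, y2)) pairs from a grounded Qwen-VL response.

    Hypothetical helper: assumes box coordinates are normalized to 0-1000 and
    rescales them to pixel coordinates of the given image size.
    """
    pattern = re.compile(
        r"<ref>(.*?)</ref><box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>", re.S
    )
    results = []
    for label, x1, y1, x2, y2 in pattern.findall(response):
        box = (
            int(x1) / 1000 * width,
            int(y1) / 1000 * height,
            int(x2) / 1000 * width,
            int(y2) / 1000 * height,
        )
        results.append((label.strip(), box))
    return results

demo = ("<ref> Woman</ref><box>(451,379),(731,806)</box> and"
        "<ref> her dog</ref><box>(219,424),(576,896)</box> playing on the beach")
print(parse_ref_boxes(demo, width=1280, height=853))  # example image size
```
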
## 评测 (Evaluation)

我们从两个角度评测了两个模型的能力:

1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
   - Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
   - General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
   - Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
   - Referring Expression Comprehension:评测模型给定物体描述画检测框的能力;
2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
   - 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。
   - 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
   - 评测同时包含英文版本和中文版本。

评测结果如下:

We evaluated the model's ability from two perspectives:

1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
   - Zero-shot Caption: Evaluate the model's zero-shot image captioning ability on unseen datasets;
   - General VQA: Evaluate the model's general question-answering ability on images, covering judgment, color, number, category, and similar questions;
   - Text-based VQA: Evaluate the model's ability to recognize text in images, such as document QA and chart QA;
   - Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
   - The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, and math problem solving;
   - In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring (a minimal sketch of this assembly follows after this list);
   - The benchmark includes both English and Chinese versions.

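To make the scoring procedure above concrete, here is a minimal, hypothetical sketch of how a human image annotation, a question, and a model answer could be packed into a single prompt for a GPT4 judge. The exact TouchStone prompt and scoring rubric are not reproduced here; the function below only illustrates the flow described in the bullet points, and all strings in the example call are made up.

```python
def build_touchstone_style_prompt(annotation: str, question: str, model_answer: str) -> str:
    """Hypothetical illustration of the TouchStone scoring flow, not the official prompt."""
    return (
        "You are grading a vision-language assistant. You cannot see the image, "
        "but a human-written description of it is provided.\n\n"
        f"[Image description]\n{annotation}\n\n"
        f"[Question]\n{question}\n\n"
        f"[Assistant answer]\n{model_answer}\n\n"
        "Rate the answer for correctness and helpfulness on a 0-10 scale and explain briefly."
    )

print(build_touchstone_style_prompt(
    annotation="A woman in a plaid shirt sits on the beach, high-fiving a light-colored dog.",
    question="What is the woman doing with the dog?",
    model_answer="She is giving the dog a high-five on the beach.",
))
```
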
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
<p>

### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)

<table>
<thead>
<tr>

- 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。

- For zero-shot image captioning, Qwen-VL achieves **SOTA** results on Flickr30K and competitive results with InstructBLIP on NoCaps.
- For general VQA, Qwen-VL achieves **SOTA** results under the same generalist LVLM scale settings.

### 文本导向的视觉问答 (Text-oriented VQA)

<table>
<thead>

- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pix2Struct-Large 模型。

- In text-related recognition/QA evaluation, Qwen-VL achieves SOTA results under the generalist LVLM scale settings.
- Resolution matters for several of the above evaluations. Most open-source LVLMs with 224 resolution cannot handle these evaluations, or can only do so by cropping images into tiles, whereas Qwen-VL scales the resolution to 448 and can be evaluated end-to-end. On some tasks Qwen-VL even outperforms the 1024-resolution Pix2Struct-Large model.

### 细粒度视觉定位 (Referring Expression Comprehension)

<table>
<thead>
<tr>

We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.

### 闲聊能力测评 (Chat Evaluation)

TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。

TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, and math problem solving. Please read [touchstone/README.md](touchstone/README.md) for more information.

#### 英语 (English)

| Model         | Score |
|---------------|-------|
| LLaVA         | 602.7 |
| Qwen-VL-Chat  | 645.2 |

#### 中文 (Chinese)

| Model         | Score |
|---------------|-------|

Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。

Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluations.
<br>

## 常见问题 (FAQ)

如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md) 以及 issue 区,如仍无法解决再提交 issue。

If you encounter problems, please check the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and existing issues for a solution before opening a new issue.
<br>

## 使用协议 (License Agreement)

研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。

Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use; check our license at [LICENSE](LICENSE) for more details. For commercial use, please fill out the [questionnaire](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
<br>

## 引用 (Citation)

如果你觉得我们的论文和代码对你的研究有帮助,请考虑 :star: 和引用 :pencil: :)

If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil: :)

```BibTeX
@article{Qwen-VL,
  title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2308.12966},
  year={2023}
}
```
<br>

## 联系我们 (Contact Us)

如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。

If you would like to leave a message for our research or product team, feel free to contact us by email at qianwen_opensource@alibabacloud.com.