tianyu-z committed
Commit 89f8bdc
1 Parent(s): ae9fb0a
Files changed (1)
README.md +55 -15
README.md CHANGED
@@ -65,12 +65,14 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
  | GPT-4 Turbo | - | *78.74* | *88.54* | *45.15* | *65.72* | 0.2 | 8.42 | 0.0 | *8.58* |
  | GPT-4V | - | 52.04 | 65.36 | 25.83 | 44.63 | - | - | - | - |
  | GPT-4o | - | **91.55** | **96.44** | **73.2** | **86.17** | **14.87** | **39.05** | **2.2** | **22.72** |
+ | GPT-4o-mini | - | 83.60 | 87.77 | 54.04 | 73.09 | 1.10 | 5.03 | 0 | 2.02 |
  | Gemini 1.5 Pro | - | 62.73 | 77.71 | 28.07 | 51.9 | 1.1 | 11.1 | 0.7 | 11.82 |
  | Qwen-VL-Max | - | 76.8 | 85.71 | 41.65 | 61.18 | *6.34* | *13.45* | *0.89* | 5.4 |
  | Reka Core | - | 66.46 | 84.23 | 6.71 | 25.84 | 0.0 | 3.43 | 0.0 | 3.35 |
  | Cambrian-1 | 34B | 79.69 | 89.27 | *27.20* | 50.04 | 0.03 | 1.27 | 0.00 | 1.37 |
  | Cambrian-1 | 13B | 49.35 | 65.11 | 8.37 | 29.12 | - | - | - | - |
  | Cambrian-1 | 8B | 71.13 | 83.68 | 13.78 | 35.78 | - | - | - | - |
+ | CogVLM | 17B | 73.88 | 86.24 | 34.58 | 57.17 | - | - | - | - |
  | CogVLM2 | 19B | *83.25* | *89.75* | **37.98** | **59.99** | 9.15 | 17.12 | 0.08 | 3.67 |
  | CogVLM2-Chinese | 19B | 79.90 | 87.42 | 25.13 | 48.76 | **33.24** | **57.57** | **1.34** | **17.35** |
  | DeepSeek-VL | 1.3B | 23.04 | 46.84 | 0.16 | 11.89 | 0.0 | 6.56 | 0.0 | 6.46 |
@@ -81,9 +83,11 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
  | InternLM-XComposer2-VL | 7B | 46.64 | 70.99 | 0.7 | 12.51 | 0.27 | 12.32 | 0.07 | 8.97 |
  | InternLM-XComposer2-VL-4KHD | 7B | 5.32 | 22.14 | 0.21 | 9.52 | 0.46 | 12.31 | 0.05 | 7.67 |
  | InternLM-XComposer2.5-VL | 7B | 41.35 | 63.04 | 0.93 | 13.82 | 0.46 | 12.97 | 0.11 | 10.95 |
- | InternVL-V1.5 | 25.5B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
+ | InternVL-V1.5 | 26B | 14.65 | 51.42 | 1.99 | 16.73 | 4.78 | 26.43 | 0.03 | 8.46 |
  | InternVL-V2 | 26B | 74.51 | 86.74 | 6.18 | 24.52 | 9.02 | 32.50 | 0.05 | 9.49 |
  | InternVL-V2 | 40B | **84.67** | **92.64** | 13.10 | 33.64 | 22.09 | 47.62 | 0.48 | 12.57 |
+ | InternVL-V2 | 76B | 83.20 | 91.26 | 18.45 | 41.16 | 20.58 | 44.59 | 0.56 | 15.31 |
+ | InternVL-V2-Pro | - | 77.41 | 86.59 | 12.94 | 35.01 | 19.58 | 43.98 | 0.84 | 13.97 |
  | MiniCPM-V2.5 | 8B | 31.81 | 53.24 | 1.41 | 11.94 | 4.1 | 18.03 | 0.09 | 7.39 |
  | Monkey | 7B | 50.66 | 67.6 | 1.96 | 14.02 | 0.62 | 8.34 | 0.12 | 6.36 |
  | Qwen-VL | 7B | 49.71 | 69.94 | 2.0 | 15.04 | 0.04 | 1.5 | 0.01 | 1.17 |
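EM and Jaccard in the tables above score each restored span against its ground truth. As a rough illustration only (the authoritative implementation is `src/evaluation/evaluation_metrics.py`, and whether Jaccard is computed over tokens or characters is an assumption here), a minimal sketch:

```python
def exact_match(prediction: str, target: str) -> float:
    # EM: 1 if the restored text equals the ground truth exactly (after trimming), else 0.
    return float(prediction.strip() == target.strip())


def jaccard_similarity(prediction: str, target: str) -> float:
    # Jaccard: size of the overlap between the two token sets divided by the size of their union.
    # Whitespace tokenization is an assumption for illustration; the released script defines the exact rules.
    pred, ref = set(prediction.split()), set(target.split())
    if not pred and not ref:
        return 1.0
    return len(pred & ref) / len(pred | ref)


print(exact_match("quick brown fox", "quick brown fox"))      # 1.0
print(jaccard_similarity("the quick brown fox", "the fox"))   # 0.5
```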
@@ -92,39 +96,43 @@ EM means `"Exact Match"` and Jaccard means `"Jaccard Similarity"`. The best in c
 
  # Model Evaluation
 
- ## Method 1 (recommended): use the evaluation script
- ```bash
- git clone https://github.com/tianyu-z/VCR.git
- ```
+ ## Method 1: use the evaluation script
  ### Open-source evaluation
  We support open-source model_id:
  ```python
  ["openbmb/MiniCPM-Llama3-V-2_5",
  "OpenGVLab/InternVL-Chat-V1-5",
  "internlm/internlm-xcomposer2-vl-7b",
+ "internlm/internlm-xcomposer2-4khd-7b",
+ "internlm/internlm-xcomposer2d5-7b",
  "HuggingFaceM4/idefics2-8b",
  "Qwen/Qwen-VL-Chat",
  "THUDM/cogvlm2-llama3-chinese-chat-19B",
  "THUDM/cogvlm2-llama3-chat-19B",
- "echo840/Monkey-Chat",]
+ "THUDM/cogvlm-chat-hf",
+ "echo840/Monkey-Chat",
+ "THUDM/glm-4v-9b",
+ "nyu-visionx/cambrian-phi3-3b",
+ "nyu-visionx/cambrian-8b",
+ "nyu-visionx/cambrian-13b",
+ "nyu-visionx/cambrian-34b",
+ "OpenGVLab/InternVL2-26B",
+ "OpenGVLab/InternVL2-40B",
+ "OpenGVLab/InternVL2-Llama3-76B",]
  ```
  For models not on the list, which are not integrated with Hugging Face, please refer to their GitHub repos to create the evaluation pipeline. Examples of the inference logic are in `src/evaluation/inference.py`.
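As a hedged sketch of what such an inference pipeline can look like for a Hugging-Face-hosted model (the prompt text, the `stacked_image` column name, the split name, and the output filename below are illustrative assumptions; the real `src/evaluation/inference.py` also handles model-specific prompting, save intervals, and resuming):

```python
import json
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumes the split is named "test"; adjust if the dataset uses a different split name.
dataset = load_dataset("vcr-org/VCR-wiki-en-easy-test", split="test")
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b", torch_dtype=torch.bfloat16, device_map="auto"
)

results = {}
for idx, example in enumerate(dataset):
    # Illustrative prompt and column name; the benchmark's own prompt and fields may differ.
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Restore the covered text in the image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=[example["stacked_image"]], return_tensors="pt").to(model.device)
    generated = model.generate(**inputs, max_new_tokens=128)
    results[idx] = processor.batch_decode(generated, skip_special_tokens=True)[0]

with open("HuggingFaceM4_idefics2-8b_en_easy.json", "w") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```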
 
  ```bash
  pip install -r requirements.txt
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
- # Inference from the VLMs and save the results to {model_id}_{difficulty}_{language}.json
  cd src/evaluation
- python3 inference.py --dataset_handler "vcr-org/VCR-wiki-en-easy-test" --model_id "HuggingFaceM4/idefics2-8b" --device "cuda" --dtype "bf16" --save_interval 50 --resume True
-
  # Evaluate the results and save the evaluation metrics to {model_id}_{difficulty}_{language}_evaluation_result.json
- python3 evaluation_metrics.py --model_id HuggingFaceM4/idefics2-8b --output_path . --json_filename "HuggingFaceM4_idefics2-8b_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test"
-
- # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
- python3 gather_results.py --jsons_path .
+ python3 evaluation_pipeline.py --dataset_handler "vcr-org/VCR-wiki-en-easy-test" --model_id HuggingFaceM4/idefics2-8b --device "cuda" --output_path . --bootstrap --end_index 5000
  ```
+ For large models like "OpenGVLab/InternVL2-Llama3-76B", you may have to use multiple GPUs for the evaluation. You can set `--device` to None to use all available GPUs.
+
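For reference, "use all available GPUs" typically means sharding the checkpoint across devices; a minimal sketch with `transformers` (`device_map="auto"` is the standard mechanism, while the `AutoModel` class and `trust_remote_code` flag follow the InternVL2 model cards and are assumptions about what the pipeline does internally):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2-Llama3-76B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# device_map="auto" lets accelerate shard the layers across every visible GPU.
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
```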
 
 
- ### Close-source evaluation
+ ### Close-source evaluation (using API)
  We provide the evaluation script for the close-source models in `src/evaluation/closed_source_eval.py`.
 
  You need an API key and a pre-saved testing dataset, and you need to specify the path where the data is saved.
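One way to pre-save the testing dataset locally before calling the API script (assuming the split is named `test`; the save location is up to you):

```python
from datasets import load_dataset

# Download the test split once and keep a local copy for the API-based evaluation.
ds = load_dataset("vcr-org/VCR-wiki-en-easy-test", split="test")
ds.save_to_disk("./VCR-wiki-en-easy-test")
print(ds)
```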
@@ -146,14 +154,46 @@ python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "
  # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
  python3 gather_results.py --jsons_path .
  ```
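The std and confidence interval reported with `--bootstrap` come from resampling the collected scores; a minimal sketch of a percentile bootstrap (the resample count and interval width used by `gather_results.py` may differ):

```python
import numpy as np

def bootstrap_summary(scores, n_resamples=1000, alpha=0.05, seed=0):
    # Resample the scores with replacement and summarize the distribution of resampled means.
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.array([rng.choice(scores, size=scores.size, replace=True).mean()
                      for _ in range(n_resamples)])
    low, high = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return {"mean": float(scores.mean()), "std": float(means.std()), "ci_95": (float(low), float(high))}

print(bootstrap_summary([0.91, 0.88, 0.95, 0.79, 0.86]))
```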
+ ## Method 2: use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) framework
+ You may need to incorporate the inference method of your model if the VLMEvalKit framework does not support it. For details, please refer to [here](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Development.md)
+ ```bash
+ git clone https://github.com/open-compass/VLMEvalKit.git
+ cd VLMEvalKit
+ # We use HuggingFaceM4/idefics2-8b and VCR_EN_EASY_ALL as an example
+ python run.py --data VCR_EN_EASY_ALL --model idefics2_8b --verbose
+ ```
+ You may find the supported model list [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/config.py).
 
+ `VLMEvalKit` supports the following VCR `--data` settings:
+ 
+ * English
+   * Easy
+     * `VCR_EN_EASY_ALL` (full test set, 5000 instances)
+     * `VCR_EN_EASY_500` (first 500 instances in the VCR_EN_EASY_ALL setting)
+     * `VCR_EN_EASY_100` (first 100 instances in the VCR_EN_EASY_ALL setting)
+   * Hard
+     * `VCR_EN_HARD_ALL` (full test set, 5000 instances)
+     * `VCR_EN_HARD_500` (first 500 instances in the VCR_EN_HARD_ALL setting)
+     * `VCR_EN_HARD_100` (first 100 instances in the VCR_EN_HARD_ALL setting)
+ * Chinese
+   * Easy
+     * `VCR_ZH_EASY_ALL` (full test set, 5000 instances)
+     * `VCR_ZH_EASY_500` (first 500 instances in the VCR_ZH_EASY_ALL setting)
+     * `VCR_ZH_EASY_100` (first 100 instances in the VCR_ZH_EASY_ALL setting)
+   * Hard
+     * `VCR_ZH_HARD_ALL` (full test set, 5000 instances)
+     * `VCR_ZH_HARD_500` (first 500 instances in the VCR_ZH_HARD_ALL setting)
+     * `VCR_ZH_HARD_100` (first 100 instances in the VCR_ZH_HARD_ALL setting)
+ 
- ## Method 2: use lmms-eval framework
+ ## Method 3: use lmms-eval framework
  You may need to incorporate the inference method of your model if the lmms-eval framework does not support it. For details, please refer to [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/docs/model_guide.md)
  ```bash
  pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
  # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
  python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --model idefics2 --model_args pretrained="HuggingFaceM4/idefics2-8b" --tasks vcr_wiki_en_easy --batch_size 1 --log_samples --log_samples_suffix HuggingFaceM4_idefics2-8b_vcr_wiki_en_easy --output_path ./logs/
  ```
+ You may find the supported model list [here](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/models).
+ 
  `lmms-eval` supports the following VCR `--tasks` settings:
 
  * English
 