ryokamoi committed
Commit 6365546
1 Parent(s): 559d47d

Updated README.md - added VLMEvalKit

Files changed (1)
  1. README.md +33 -13
README.md CHANGED
@@ -98,14 +98,17 @@ configs:
  ---
  # VisOnlyQA
 
- VisOnlyQA is a dataset proposed in the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
+ This repository contains the code and data for the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
 
  VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information of scientific figures. The evaluation set includes 1,200 multiple choice questions in 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.
 
  * Datasets:
-   * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
-   * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
-   * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
+   * VisOnlyQA is available at [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) 🔥🔥🔥
+     * VisOnlyQA in VLMEvalKit is different from the original one. Refer to [this section](#vlmevalkit) for details.
+   * Hugging Face
+     * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
+     * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
+     * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
  * Code: [https://github.com/psunlpgroup/VisOnlyQA](https://github.com/psunlpgroup/VisOnlyQA)
 
  <p align="center">
@@ -123,7 +126,32 @@ VisOnlyQA is designed to evaluate the visual perception capability of large visi
 
  ## Dataset
 
- The dataset is provided in Hugging Face Dataset.
+ VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, when you report results, please explicitly mention which version of the dataset you used, because the two versions are different.
+
+ ### Examples
+
+ <p align="center">
+     <img src="readme_figures/examples.png" width="800">
+ </p>
+
+ ### VLMEvalKit
+
+ [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. We welcome using it to report results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit.
+
+ The major differences are:
+
+ * VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split.
+ * VLMEvalKit uses different prompts and postprocessing.
+
+ Refer to [this document](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported model on VisOnlyQA with the following command (this example is for InternVL2-26B).
+
+ ```bash
+ python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
+ ```
+
+ ### Hugging Face Dataset
+
+ The original VisOnlyQA dataset is provided as a Hugging Face Dataset. If you want to reproduce the results in our paper, please use this version and the code in the GitHub repository.
 
  * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
    * 500 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
@@ -134,14 +162,6 @@ The dataset is provided in Hugging Face Dataset.
 
  The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository includes identical datasets, except for the training data.
 
- ### Examples
-
- <p align="center">
-     <img src="readme_figures/examples.png" width="800">
- </p>
-
- ### Usage
-
  ```python
  from datasets import load_dataset
 
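The Python snippet at the end of the last hunk is cut off by the hunk boundary. As a minimal, hedged sketch (an assumption about usage, not necessarily the code that follows in the README), the Hugging Face version can be loaded like this:

```python
# Minimal sketch (assumption, not the README's exact snippet): load the two
# evaluation subsets from the Hugging Face Hub. Config names are discovered
# at runtime with get_dataset_config_names rather than hard-coded.
from datasets import get_dataset_config_names, load_dataset

for repo_id in ["ryokamoi/VisOnlyQA_Eval_Real", "ryokamoi/VisOnlyQA_Eval_Synthetic"]:
    for config_name in get_dataset_config_names(repo_id):
        dataset_dict = load_dataset(repo_id, config_name)  # DatasetDict keyed by split
        print(repo_id, config_name, {split: len(ds) for split, ds in dataset_dict.items()})
```

Discovering the config names at runtime avoids hard-coding the per-task splits, which differ between the two evaluation subsets.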