Added paper link
README.md CHANGED
@@ -98,7 +98,7 @@ configs:
 ---
 # VisOnlyQA
 
-
+VisOnlyQA is a dataset proposed in the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
 
 VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information of scientific figures. The evaluation set includes 1,200 multiple choice questions in 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.
 
@@ -117,6 +117,7 @@ VisOnlyQA is designed to evaluate the visual perception capability of large visi
     title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
     author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
     year={2024},
+    journal={arXiv preprint arXiv:2412.00947}
 }
 ```
 