---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- visual-question-answering
pretty_name: Geoperception
tags:
- multi-modal-qa
- math-qa
- figure-qa
- geometry-qa
- math-word-problem
- vqa
- geometry-reasoning
- numeric-common-sense
- scientific-reasoning
- logical-reasoning
- geometry-diagram
- synthetic-scene
- scientific-figure
- function-plot
- abstract-scene
- mathematics
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: predicate
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 294203058.193
num_examples: 11657
download_size: 93419701
dataset_size: 294203058.193
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Geoperception
A Benchmark for Low-level Geometric Perception
## Dataset Details
### Dataset Description
Geoperception is a benchmark focused specifically on assessing a model's low-level visual perception ability in 2D geometry.
It is sourced from the Geometry-3K corpus, which offers precise logical forms for geometric diagrams compiled from popular high-school textbooks.
### Dataset Sources
- **Repository:** https://github.com/euclid-multimodal/Euclid-Model
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]
## Uses
Evaluation of multimodal LLMs' low-level visual perception abilities in the 2D geometry domain.
## Dataset Structure
### Fields
- **id**: unique identifier of each data instance
- **question**: the question text
- **answer**: the ground-truth answer
- **predicate**: question type, one of
  - **PointLiesOnLine**
  - **LineComparison**
  - **PointLiesOnCircle**
  - **AngleClassification**
  - **Parallel**
  - **Perpendicular**
  - **Equal**
- **image**: the geometric diagram
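Since each instance carries a `predicate` field, evaluation results are naturally reported per question type. Below is a minimal sketch of computing per-predicate accuracy from records shaped like the fields above; the helper name, the string-normalized answer comparison, and the `predictions` mapping are illustrative assumptions, not part of the dataset itself.

```python
from collections import defaultdict

def per_predicate_accuracy(records, predictions):
    """Accuracy grouped by the `predicate` field.

    records: iterable of dicts with keys `id`, `answer`, `predicate`
    predictions: dict mapping instance id -> predicted answer string
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        p = r["predicate"]
        total[p] += 1
        # Simple case-insensitive string match; adapt to your answer format.
        if predictions.get(r["id"], "").strip().lower() == r["answer"].strip().lower():
            correct[p] += 1
    return {p: correct[p] / total[p] for p in total}

# Tiny illustrative example (not real dataset content):
records = [
    {"id": "a", "answer": "AB", "predicate": "PointLiesOnLine"},
    {"id": "b", "answer": "CD", "predicate": "PointLiesOnLine"},
    {"id": "c", "answer": "acute", "predicate": "AngleClassification"},
]
predictions = {"a": "AB", "b": "EF", "c": "Acute"}
print(per_predicate_accuracy(records, predictions))
```

Records in this shape can be obtained by iterating over the `train` split loaded with the `datasets` library.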
## Citation
**BibTeX:**
[More Information Needed]