---
language:
  - en
license: apache-2.0
size_categories:
  - 10K<n<100K
task_categories:
  - question-answering
  - visual-question-answering
pretty_name: Geoperception
tags:
  - multi-modal-qa
  - math-qa
  - figure-qa
  - geometry-qa
  - math-word-problem
  - vqa
  - geometry-reasoning
  - numeric-common-sense
  - scientific-reasoning
  - logical-reasoning
  - geometry-diagram
  - synthetic-scene
  - scientific-figure
  - function-plot
  - abstract-scene
  - mathematics
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: predicate
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 294203058.193
      num_examples: 11657
  download_size: 93419701
  dataset_size: 294203058.193
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for Geoperception

A Benchmark for Low-level Geometric Perception

## Dataset Details

### Dataset Description

Geoperception is a benchmark focused specifically on assessing a model's low-level visual perception ability in 2D geometry.

It is sourced from the Geometry-3K corpus, which provides precise logical forms for geometric diagrams compiled from popular high-school textbooks.

### Dataset Sources

## Uses

Evaluation of multimodal LLMs' low-level visual perception abilities in the 2D geometry domain.

## Dataset Structure

### Fields

Each instance contains the following fields (a loading sketch follows the list):

- `id`: unique identifier of the data instance
- `question`: the question text
- `answer`: the ground-truth answer
- `predicate`: the question type, one of
  - `PointLiesOnLine`
  - `LineComparison`
  - `PointLiesOnCircle`
  - `AngleClassification`
  - `Parallel`
  - `Perpendicular`
  - `Equal`
- `image`: the geometric diagram the question refers to
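
The sketch below shows one way to load the dataset and inspect a record with the Hugging Face `datasets` library. The repository ID is an assumption (the card does not state the hub path), so substitute the actual one if it differs.

```python
# Minimal loading sketch. "jrzhang/Geoperception" is an assumed hub path;
# replace it with the dataset's actual repository ID.
from datasets import load_dataset

ds = load_dataset("jrzhang/Geoperception", split="train")

example = ds[0]
print(example["id"])         # instance identifier
print(example["predicate"])  # one of the seven question types listed above
print(example["question"])
print(example["answer"])
example["image"].show()      # the diagram decodes to a PIL image
```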

## Evaluation Results

Per-predicate scores (%). Column abbreviations correspond to the predicates listed above: POL = PointLiesOnLine, POC = PointLiesOnCircle, ALC = AngleClassification, LHC = LineComparison, PEP = Perpendicular, PRA = Parallel, EQL = Equal.

| Model | POL | POC | ALC | LHC | PEP | PRA | EQL | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random Baseline | 1.35 | 2.63 | 59.92 | 51.36 | 0.23 | 0.00 | 0.02 | 16.50 |
| **Open Source** | | | | | | | | |
| Molmo-7B-D | 11.96 | 35.73 | 56.77 | 16.79 | 1.06 | 0.00 | 0.81 | 17.59 |
| Llama-3.2-11B | 16.22 | 37.12 | 59.46 | 52.08 | 8.38 | 22.41 | 49.86 | 35.08 |
| Qwen2-VL-7B | 21.89 | 41.60 | 46.60 | 63.27 | 26.41 | 30.19 | 54.37 | 40.62 |
| Cambrian-1-8B | 15.14 | 28.68 | 58.05 | 61.48 | 22.96 | 30.74 | 31.04 | 35.44 |
| Pixtral-12B | 24.63 | 53.21 | 47.33 | 51.43 | 21.96 | 36.64 | 58.41 | 41.95 |
| **Closed Source** | | | | | | | | |
| GPT-4o-mini | 9.80 | 61.19 | 48.84 | 69.51 | 9.80 | 4.25 | 44.74 | 35.45 |
| GPT-4o | 16.43 | 71.49 | 55.63 | 74.39 | 24.80 | 60.30 | 44.69 | 49.68 |
| Claude 3.5 Sonnet | 25.44 | 68.34 | 42.95 | 70.73 | 21.41 | 63.92 | 66.34 | 51.30 |
| Gemini-1.5-Flash | 29.30 | 67.75 | 49.89 | 76.69 | 29.98 | 63.44 | 66.28 | 54.76 |
| Gemini-1.5-Pro | 24.42 | 69.80 | 57.96 | 79.05 | 38.81 | 76.65 | 52.15 | 56.98 |
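
The card does not spell out the scoring protocol, so the following is only a hedged sketch: it scores predictions by normalized exact match and macro-averages the per-predicate accuracies into the Overall column (macro-averaging reproduces the table; e.g., the mean of the Random Baseline's seven per-predicate scores is 16.50). The `evaluate` function and the `predictions` mapping are hypothetical stand-ins, not part of the benchmark's tooling.

```python
# Hedged evaluation sketch. Exact-match scoring is an assumption; the
# macro-averaged Overall is inferred from the table above.
from collections import defaultdict

def evaluate(ds, predictions):
    """ds: a Geoperception split; predictions: dict mapping instance id -> answer string."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in ds:
        total[ex["predicate"]] += 1
        pred = predictions.get(ex["id"], "")
        # Naive whitespace/case normalization before comparison (an assumption).
        if pred.strip().lower() == ex["answer"].strip().lower():
            correct[ex["predicate"]] += 1
    per_predicate = {p: 100.0 * correct[p] / total[p] for p in total}
    overall = sum(per_predicate.values()) / len(per_predicate)  # macro-average
    return per_predicate, overall
```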

## Citation

BibTeX:

[More Information Needed]