---
dataset_info:
  - config_name: tqa
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: oracle_answer
        dtype: string
      - name: oracle_option
        dtype: string
      - name: oracle_full_answer
        dtype: string
    splits:
      - name: test
        num_bytes: 4723238
        num_examples: 4635
    download_size: 804261
    dataset_size: 4723238
  - config_name: vqa
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: image
        dtype: image
      - name: oracle_answer
        dtype: string
      - name: oracle_option
        dtype: string
      - name: oracle_full_answer
        dtype: string
    splits:
      - name: test
        num_bytes: 733091578
        num_examples: 4635
    download_size: 712137895
    dataset_size: 733091578
  - config_name: vtqa
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: image
        dtype: image
      - name: oracle_answer
        dtype: string
      - name: oracle_option
        dtype: string
      - name: oracle_full_answer
        dtype: string
    splits:
      - name: test
        num_bytes: 736109315
        num_examples: 4635
    download_size: 712879771
    dataset_size: 736109315
configs:
  - config_name: tqa
    data_files:
      - split: test
        path: tqa/test-*
  - config_name: vqa
    data_files:
      - split: test
        path: vqa/test-*
  - config_name: vtqa
    data_files:
      - split: test
        path: vtqa/test-*
---

## 🤔 About SpatialEval

SpatialEval is a comprehensive benchmark for evaluating spatial intelligence in large language models (LLMs) and vision-language models (VLMs) across four key dimensions:

- Spatial relationships
- Positional understanding
- Object counting
- Navigation

### Benchmark Tasks

1. **Spatial-Map**: Understanding spatial relationships between objects in map-based scenarios
2. **Maze-Nav**: Testing navigation through complex environments
3. **Spatial-Grid**: Evaluating spatial reasoning within structured environments
4. **Spatial-Real**: Assessing real-world spatial understanding

Each task supports three input modalities (see the schema sketch after the figure below):

- Text-only (TQA)
- Vision-only (VQA)
- Vision-Text (VTQA)

![Overview of the four SpatialEval tasks](spatialeval_task.png)
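
All three configurations contain 4,635 test examples but expose different columns. As a quick sanity check, here is a minimal sketch (using the Hugging Face `datasets` library) that prints each configuration's schema; per the metadata above, `tqa` is text-only, while `vqa` and `vtqa` additionally carry an `image` column:

```python
from datasets import load_dataset

# Print the schema of each configuration. Per the dataset metadata,
# tqa is text-only, while vqa and vtqa add an `image` column.
# Note: the vqa/vtqa configs are roughly 700 MB each to download.
for config in ["tqa", "vqa", "vtqa"]:
    ds = load_dataset("MilaWang/SpatialEval", config, split="test")
    print(config, len(ds), ds.column_names)
```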

## 📌 Quick Links

- **Project Page:** https://spatialeval.github.io/
- **Paper:** https://arxiv.org/pdf/2406.14852
- **Code:** https://github.com/jiayuww/SpatialEval
- **Talk:** https://neurips.cc/virtual/2024/poster/94371

## 🚀 Quick Start

### 📍 Load Dataset

SpatialEval provides three input modalities, TQA (Text-only), VQA (Vision-only), and VTQA (Vision-Text), across four tasks: Spatial-Map, Maze-Nav, Spatial-Grid, and Spatial-Real. Every modality and task is accessible via Hugging Face. Make sure the `datasets` package is installed (`pip install datasets`), then load the configuration you need:

```python
from datasets import load_dataset

tqa = load_dataset("MilaWang/SpatialEval", "tqa", split="test")
vqa = load_dataset("MilaWang/SpatialEval", "vqa", split="test")
vtqa = load_dataset("MilaWang/SpatialEval", "vtqa", split="test")
```
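
Each example pairs the question text with the reference answer in three forms (`oracle_answer`, `oracle_option`, and `oracle_full_answer`); the `vqa` and `vtqa` configurations additionally return an `image` field, decoded as a PIL image. Below is a minimal sketch of inspecting one example; the truncation length and output filename are arbitrary choices for illustration:

```python
sample = vtqa[0]
print(sample["id"])
print(sample["text"][:300])          # question prompt (truncated)
print(sample["oracle_option"])       # ground-truth option label
print(sample["oracle_full_answer"])  # full reference answer
sample["image"].save("example.png")  # image inputs decode to PIL images
```

The `oracle_option` field is convenient for exact-match scoring of option-style predictions, while `oracle_full_answer` supports free-form comparison.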

## ⭐ Citation

If you find our work helpful, please consider citing our paper 😊

```bibtex
@inproceedings{wang2024spatial,
  title={Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models},
  author={Wang, Jiayu and Ming, Yifei and Shi, Zhenmei and Vineet, Vibhav and Wang, Xin and Li, Yixuan and Joshi, Neel},
  booktitle={The Thirty-Eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```

## 💬 Questions

Have questions? We're here to help!

- Open an issue in the [GitHub repository](https://github.com/jiayuww/SpatialEval)
- Contact us through the channels listed on our [project page](https://spatialeval.github.io/)