---
dataset_info:
- config_name: tqa
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: oracle_answer
    dtype: string
  - name: oracle_option
    dtype: string
  - name: oracle_full_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 4723238
    num_examples: 4635
  download_size: 804261
  dataset_size: 4723238
- config_name: vqa
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: oracle_answer
    dtype: string
  - name: oracle_option
    dtype: string
  - name: oracle_full_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 733091578.0
    num_examples: 4635
  download_size: 712137895
  dataset_size: 733091578.0
- config_name: vtqa
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: oracle_answer
    dtype: string
  - name: oracle_option
    dtype: string
  - name: oracle_full_answer
    dtype: string
  splits:
  - name: test
    num_bytes: 736109315.0
    num_examples: 4635
  download_size: 712879771
  dataset_size: 736109315.0
configs:
- config_name: tqa
  data_files:
  - split: test
    path: tqa/test-*
- config_name: vqa
  data_files:
  - split: test
    path: vqa/test-*
- config_name: vtqa
  data_files:
  - split: test
    path: vtqa/test-*
---
## 🤔 About SpatialEval
SpatialEval is a comprehensive benchmark for evaluating spatial intelligence in LLMs and VLMs across four key dimensions:
- Spatial relationships
- Positional understanding
- Object counting
- Navigation
### Benchmark Tasks
1. **Spatial-Map**: Understanding spatial relationships between objects in map-based scenarios
2. **Maze-Nav**: Testing navigation through complex environments
3. **Spatial-Grid**: Evaluating spatial reasoning within structured environments
4. **Spatial-Real**: Assessing real-world spatial understanding
Each task supports three input modalities:
- Text-only (TQA)
- Vision-only (VQA)
- Vision-Text (VTQA)
![spatialeval_task.png](https://cdn-uploads.huggingface.co/production/uploads/651651f5d93a51ceda3021c3/kpjld6-HCg5LXhO9Ju6-Q.png)
## 📌 Quick Links
- **Project Page:** https://spatialeval.github.io/
- **Paper:** https://arxiv.org/pdf/2406.14852
- **Code:** https://github.com/jiayuww/SpatialEval
- **Talk:** https://neurips.cc/virtual/2024/poster/94371
## 🚀 Quick Start
### 📍 Load Dataset
SpatialEval provides three input modalities—TQA (Text-only), VQA (Vision-only), and VTQA (Vision-Text)—across four tasks: Spatial-Map, Maze-Nav, Spatial-Grid, and Spatial-Real. Each modality and task is accessible via Hugging Face. First, make sure you have installed the required [packages](https://huggingface.co/docs/datasets/en/quickstart):
```python
from datasets import load_dataset
tqa = load_dataset("MilaWang/SpatialEval", "tqa", split="test")
vqa = load_dataset("MilaWang/SpatialEval", "vqa", split="test")
vtqa = load_dataset("MilaWang/SpatialEval", "vtqa", split="test")
```
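Each example exposes the fields declared in the schema above: `id`, `text`, and the three oracle fields (plus `image` for the VQA/VTQA configs). As a minimal sketch with made-up field values (the `is_correct` helper below is a hypothetical exact-match check, not part of SpatialEval's official evaluation), a loaded example can be scored like this:

```python
# Sketch with made-up values; real examples come from load_dataset(...) above.
example = {
    "id": "tqa-0001",                        # unique question identifier
    "text": "Which object is to the left of the house? (A) tree (B) car",
    "oracle_answer": "tree",                 # ground-truth short answer
    "oracle_option": "A",                    # ground-truth option letter
    "oracle_full_answer": "The tree is to the left of the house.",
}

def is_correct(prediction: str, ex: dict) -> bool:
    """Naive exact-match check of a predicted option letter against the oracle."""
    return prediction.strip().upper() == ex["oracle_option"]

print(is_correct("a", example))  # True
print(is_correct("B", example))  # False
```

In practice you would replace the dict with an element of the loaded split (e.g. `tqa[0]`) and `prediction` with your model's parsed output.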
## ⭐ Citation
If you find our work helpful, please consider citing our paper 😊
```bibtex
@inproceedings{wang2024spatial,
  title={Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models},
  author={Wang, Jiayu and Ming, Yifei and Shi, Zhenmei and Vineet, Vibhav and Wang, Xin and Li, Yixuan and Joshi, Neel},
  booktitle={The Thirty-Eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```
## 💬 Questions
Have questions? We're here to help!
- Open an issue in the [GitHub repository](https://github.com/jiayuww/SpatialEval)
- Contact us through the channels listed on our project page