MilaWang committed · Commit 6096353 · verified · 1 Parent(s): fc13a32

Update README.md

init spatialeval description

Files changed (1): README.md (+57, -0)

README.md CHANGED
## 🤔 About SpatialEval
SpatialEval is a comprehensive benchmark for evaluating spatial intelligence in LLMs and VLMs across four key dimensions:
- Spatial relationships
- Positional understanding
- Object counting
- Navigation

### Benchmark Tasks
1. **Spatial-Map**: Understanding spatial relationships between objects in map-based scenarios
2. **Maze-Nav**: Testing navigation through complex environments
3. **Spatial-Grid**: Evaluating spatial reasoning within structured environments
4. **Spatial-Real**: Assessing real-world spatial understanding

Each task supports three input modalities:
- Text-only (TQA)
- Vision-only (VQA)
- Vision-Text (VTQA)

![spatialeval_task.png](https://cdn-uploads.huggingface.co/production/uploads/651651f5d93a51ceda3021c3/kpjld6-HCg5LXhO9Ju6-Q.png)

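The modality names above map one-to-one onto the Hugging Face dataset configurations used in the Quick Start below. As a minimal sketch (assuming only the repository id `MilaWang/SpatialEval` from this card), you can list the available configurations and their splits programmatically:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# One configuration per input modality on the Hub.
configs = get_dataset_config_names("MilaWang/SpatialEval")
print(configs)  # expected to include 'tqa', 'vqa', and 'vtqa'

# Each configuration exposes its available splits (the Quick Start below uses "test").
for config in configs:
    print(config, get_dataset_split_names("MilaWang/SpatialEval", config))
```
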
## 📌 Quick Links
- Project Page: https://spatialeval.github.io/
- Paper: https://arxiv.org/pdf/2406.14852
- Code: https://github.com/jiayuww/SpatialEval
- Talk: https://neurips.cc/virtual/2024/poster/94371

## 🚀 Quick Start

### 📍 Load Dataset

SpatialEval provides three input modalities (TQA: Text-only, VQA: Vision-only, VTQA: Vision-Text) for each of its four tasks: Spatial-Map, Maze-Nav, Spatial-Grid, and Spatial-Real. Each modality and task is easily accessible via the Hugging Face Hub. Make sure you have the [datasets library](https://huggingface.co/docs/datasets/en/quickstart) installed:

```python
from datasets import load_dataset

# Load the test split of each input-modality configuration.
tqa = load_dataset("MilaWang/SpatialEval", "tqa", split="test")
vqa = load_dataset("MilaWang/SpatialEval", "vqa", split="test")
vtqa = load_dataset("MilaWang/SpatialEval", "vtqa", split="test")
```

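After loading, it can help to inspect the schema before writing task-specific code. The snippet below is a minimal sketch that only prints what the dataset itself reports (row count, column names, and the first example), so it makes no assumptions about specific column names:

```python
from datasets import load_dataset

# Load one modality and inspect it without assuming any particular column layout.
vtqa = load_dataset("MilaWang/SpatialEval", "vtqa", split="test")

print(vtqa)           # number of rows and column names
print(vtqa.features)  # full feature schema (e.g., which columns are text vs. images)
print(vtqa[0])        # the first example as a Python dict
```
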
## ⭐ Citation

If you find our work helpful, please consider citing our paper 😊

```bibtex
@inproceedings{wang2024spatial,
  title={Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models},
  author={Wang, Jiayu and Ming, Yifei and Shi, Zhenmei and Vineet, Vibhav and Wang, Xin and Li, Yixuan and Joshi, Neel},
  booktitle={The Thirty-Eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```

## 💬 Questions
Have questions? We're here to help!
- Open an issue in the [GitHub repository](https://github.com/jiayuww/SpatialEval)
- Contact us through the channels listed on our [project page](https://spatialeval.github.io/)