VSI-590K
Website | Paper | GitHub | Models
Authors: Shusheng Yang*, Jihan Yang*, Pinzhi Huang†, Ellis Brown†, et al.
VSI-590K is a large-scale instruction-tuning dataset focused on spatial reasoning. The dataset is curated from diverse sources and carefully annotated.
Quick Start
import json

# Load from JSONL file
with open('vsi_590k.jsonl', 'r') as f:
    for line in f:
        sample = json.loads(line.strip())
        print(sample)
        break
Dataset Structure
Each line in vsi_590k.jsonl is a JSON object with this structure:
{
  "conversations": [
    {"from": "human", "value": "<image>\nQuestion text..."},
    {"from": "gpt", "value": "Answer text"}
  ],
  "question_type": "relative_direction_object",  # e.g., "absolute_distance", "object_count"
  "image": "path/to/image.jpg",  # Optional: present if image sample
  "video": "path/to/video.mp4"   # Optional: present if video sample
}
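If you prefer the Hugging Face datasets library, the same JSONL file can be loaded through its generic "json" loader. This is a minimal sketch assuming vsi_590k.jsonl sits in the working directory; adjust the path to wherever the file lives in your copy of the dataset.

from datasets import load_dataset

# Load the JSONL file with the generic "json" loader
# (the file path here is an assumption; point it at your local copy).
ds = load_dataset("json", data_files="vsi_590k.jsonl", split="train")

print(ds[0]["question_type"])
print(ds[0]["conversations"][0]["value"][:100])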
Usage Examples
import json

def process_sample(sample):
    """Extract question and answer from conversation."""
    question = sample['conversations'][0]['value'].replace('<image>', '').strip()
    answer = sample['conversations'][1]['value']
    return {
        'question': question,
        'answer': answer,
        'question_type': sample['question_type'],
        'media_path': sample.get('image') or sample.get('video')
    }

# Process all samples
with open('vsi_590k.jsonl', 'r') as f:
    for line in f:
        sample = json.loads(line.strip())
        processed = process_sample(sample)
        # Use processed sample for training/inference
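Building on process_sample, a quick sanity check is to tally the distribution of question types across the file (a minimal sketch; the filename is assumed to match the Quick Start above):

import json
from collections import Counter

# Count how many QA pairs fall under each question_type.
type_counts = Counter()
with open('vsi_590k.jsonl', 'r') as f:
    for line in f:
        sample = json.loads(line.strip())
        type_counts[sample['question_type']] += 1

for question_type, count in type_counts.most_common():
    print(f"{question_type}: {count}")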
Dataset Details
- Total: 590,667 QA pairs
- Videos: 5,963 unique videos
- Images: 44,858 unique images
- Data Sources: 10 diverse sources (annotated real videos, simulated data, unannotated real videos)
Data Creation Pipeline
Annotated Real Videos: As proposed in VSI-Bench, multimodal visual-spatial reasoning requires 3D geometric and spatial understanding. We repurpose the training splits of existing indoor-scan and ego-vision datasets containing 3D instance-level annotations, including S3DIS, ScanNet, ScanNet++ V2, ARKitScenes, and Aria Digital Twin. For each dataset, the annotations are organized into meta-information files containing scene attributes (object counts, bounding boxes, room size, etc.), and question templates are automatically filled from this meta-information to generate questions, as sketched below.
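To make the template-driven generation concrete, here is a hypothetical sketch: the meta-information schema, template wording, and question types below are illustrative assumptions, not the dataset's actual files.

# Hypothetical meta-information for one scene; the real schema in the
# meta-information files may differ.
scene_meta = {
    "scene_id": "scannet_scene0000_00",
    "room_size_m2": 23.4,
    "object_counts": {"chair": 6, "table": 1, "sofa": 2},
}

# Simple question templates keyed by question type (illustrative wording).
TEMPLATES = {
    "object_count": "How many {category}(s) are in this room?",
    "room_size": "What is the size of this room in square meters?",
}

def generate_qa(meta):
    """Instantiate QA pairs from scene attributes."""
    qa_pairs = []
    for category, count in meta["object_counts"].items():
        qa_pairs.append({
            "question_type": "object_count",
            "question": TEMPLATES["object_count"].format(category=category),
            "answer": str(count),
        })
    qa_pairs.append({
        "question_type": "room_size",
        "question": TEMPLATES["room_size"],
        "answer": f"{meta['room_size_m2']:.1f}",
    })
    return qa_pairs

for qa in generate_qa(scene_meta):
    print(qa)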
Simulated Data: Given the scarcity of 3D-annotated data, we leverage embodied simulators to programmatically generate spatially grounded video trajectories and QA pairs, rendering 625 video traversals through ProcTHOR scenes with diverse layouts, object placements, and appearances. We adapt the same methodology to Hypersim, sampling 5,113 images from 461 indoor scenes. Given the instance-level bounding boxes, we construct supervision consistent with our annotated real-video setup.
Unannotated Real Videos: Web-sourced videos provide rich diversity in room types, regions, and layouts. We web-crawled around 19K room tour videos from YouTube, and also sourced videos from the robotic learning datasets Open-X-Embodiment and AgiBot-World. Since these videos lack 3D annotations, we build a pseudo-annotation pipeline: we subsample and filter frames, applying powerful image detection, segmentation, and video reconstruction models to generate pseudo-annotated images. We generate pseudo-annotations on images instead of videos because pseudo-annotations from semantic extraction and reconstruction models on full videos are too noisy to serve as training data.
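As a rough illustration of the frame subsampling and filtering step in such a pipeline (the sampling interval, blur heuristic, and function name are assumptions rather than the actual pipeline code):

import cv2  # OpenCV, assumed available

def subsample_frames(video_path, every_n=30, blur_threshold=100.0):
    """Keep every n-th frame, dropping blurry ones via a variance-of-Laplacian heuristic."""
    cap = cv2.VideoCapture(video_path)
    kept, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:
                kept.append(frame)
        index += 1
    cap.release()
    return kept

# The kept frames would then be passed to detection and segmentation models
# to produce pseudo-annotated images, as described above.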
Citation
@article{yang2025cambrian,
  title={Cambrian-S: Towards Spatial Supersensing in Video},
  author={Yang, Shusheng and Yang, Jihan and Huang, Pinzhi and Brown, Ellis and Yang, Zihao and Yu, Yue and Tong, Shengbang and Zheng, Zihan and Xu, Yifan and Wang, Muhan and Lu, Danhao and Fergus, Rob and LeCun, Yann and Fei-Fei, Li and Xie, Saining},
  journal={arXiv preprint arXiv:2511.04670},
  year={2025}
}