---
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: day
data_files:
- split: train
path: "day-train/*"
- split: validation
path: "day-validation/*"
- config_name: night
data_files:
- split: train
path: "night-train/*"
- split: validation
path: "night-validation/*"
---
# NuScenes-QA-mini Dataset
## TL;DR:
This dataset is used for multimodal question-answering tasks in autonomous driving scenarios. We created it based on the [nuScenes-QA dataset](https://github.com/qiantianwen/NuScenes-QA) for the evaluations in our paper [Modality Plug-and-Play: Elastic Modality Adaptation in Multimodal LLMs for Embodied AI](https://arxiv.org/abs/2312.07886). The samples are divided into day and night scenes.

|scene|# train samples|# validation samples|
|---|---|---|
|day|2,229|2,229|
|night|659|659|

Each sample contains:
- the original token ID in the nuScenes database
- RGB images from 6 views (front, front left, front right, back, back left, back right)
- a 5D LiDAR point cloud (distance, intensity, X, Y, and Z axes)
- question-answer pairs
## Detailed Description
This dataset is built on the [nuScenes](https://www.nuscenes.org/) mini-split, with QA pairs obtained from the original [nuScenes-QA dataset](https://github.com/qiantianwen/NuScenes-QA). The data in nuScenes-QA is collected from driving scenes in the cities of Boston and Singapore, covering diverse locations, times, and weather conditions.

<img src="nuqa_example.PNG" alt="fig1" width="600"/>

Each data sample contains **6-view RGB camera captures, a 5D LiDAR point cloud, and a corresponding text QA pair**. Each LiDAR point cloud has 5 dimensions per point: distance, intensity, and the X, Y, and Z coordinates. The questions in this dataset are generally difficult and may require multiple hops of reasoning over the RGB and LiDAR data. For example, to answer the sample question in the figure above, the ML model must first identify the direction in which the “construction vehicle” appears, and then count the number of “parked trucks” in that direction. In our evaluations, we further cast question answering (QA) as an open-ended text generation task. This is more challenging than the evaluation setup in the original nuScenes-QA [paper](https://arxiv.org/abs/2305.14836), where an answer set is predefined and QA is a classification task over that predefined answer set.
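As a rough illustration of that layout, the sketch below assumes the point cloud arrives as an N×5 array with columns (distance, intensity, X, Y, Z); the actual field names and storage format in the released files may differ, so inspect a loaded sample to confirm.

```py
import numpy as np

# Hypothetical point cloud with N returns and 5 values per return,
# following the (distance, intensity, X, Y, Z) description above.
# The real column order/field names in the dataset may differ.
points = np.random.rand(30000, 5).astype(np.float32)

distance = points[:, 0]   # range of each LiDAR return
intensity = points[:, 1]  # reflectance intensity
xyz = points[:, 2:5]      # 3D coordinates

print(xyz.shape, float(distance.mean()))
```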
<img src="image_darken.PNG" alt="fig2" width="600"/>

In most RGB images in the nuScenes dataset, night scenes still have abundant lighting (e.g., from street lights), as shown on the left of the figure above. We therefore reduce the brightness of the RGB captures in night scenes by 80% and apply a Gaussian blur with a radius of 7, as shown on the right. Applying this preprocessing to the RGB views of night scenes yields training and validation splits of 659 samples each. The RGB views in daytime scenes are left unmodified; the day split contains 2,229 samples for training and 2,229 for validation.
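A minimal sketch of that night-scene preprocessing, assuming PIL is applied to a single RGB view (the file name is hypothetical, and the exact pipeline used to build the dataset may differ):

```py
from PIL import Image, ImageEnhance, ImageFilter

# Illustrative only: darken an RGB view by 80% and apply a Gaussian blur
# with radius 7, mirroring the night-scene preprocessing described above.
img = Image.open("front_view.jpg")                     # hypothetical input file
img = ImageEnhance.Brightness(img).enhance(0.2)        # keep 20% of brightness
img = img.filter(ImageFilter.GaussianBlur(radius=7))   # blur with radius 7
img.save("front_view_night.jpg")
```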
## How to Use
```py
from datasets import load_dataset
# load train split in day scene
day_train = load_dataset("KevinNotSmile/nuscenes-qa-mini", "day", split="train")
```
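Other configurations and splits load the same way; for example, the night-scene validation split (the exact column names are best checked at runtime, since they are not listed above):

```py
# load the validation split of the night scene and inspect one sample
night_val = load_dataset("KevinNotSmile/nuscenes-qa-mini", "night", split="validation")
print(night_val.column_names)  # check the field names before indexing into samples
print(night_val[0])
```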
## Citation
If you find our dataset useful, please consider citing:
```
@inproceedings{caesar2020nuscenes,
  title={nuScenes: A multimodal dataset for autonomous driving},
  author={Caesar, Holger and Bankiti, Varun and Lang, Alex H and Vora, Sourabh and Liong, Venice Erin and Xu, Qiang and Krishnan, Anush and Pan, Yu and Baldan, Giancarlo and Beijbom, Oscar},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={11621--11631},
  year={2020}
}

@article{qian2023nuscenes,
  title={NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario},
  author={Qian, Tianwen and Chen, Jingjing and Zhuo, Linhai and Jiao, Yang and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2305.14836},
  year={2023}
}

@article{huang2023modality,
  title={Modality Plug-and-Play: Elastic Modality Adaptation in Multimodal LLMs for Embodied AI},
  author={Huang, Kai and Yang, Boyuan and Gao, Wei},
  journal={arXiv preprint arXiv:2312.07886},
  year={2023}
}
```
## License

[![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa]

In line with the original nuScenes license, this work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg