metadata
license: cc-by-nc-3.0
MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding
This is version 2.0 of the dataset, prepared for the WACV 2025 LLVM-AD Challenge.
Tencent, University of Illinois at Urbana-Champaign, Purdue University, University of Virginia
Dataset Structure
----data
|----images
| |----FR1
| | |----photo_forward.jpg
| | |----photo_lef_back.jpg
| | |----photo_rig_back.jpg
| | |----point_cloud_bev.jpg
| |----FR2
| | |----photo_forward.jpg
| | |----photo_lef_back.jpg
| | |----photo_rig_back.jpg
| | |----point_cloud_bev.jpg
| ...
|----train_v2.json
|----val_v2.json
|----test_v2.json
Input
The input data for each frame consists of the forward-view image, the rear-left and rear-right view images of the scene, and a bird's-eye-view (BEV) image projected from the 3D point cloud.
All of the data follow the standard used to produce HD maps.
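For reference, below is a minimal Python sketch of how the files listed in the directory tree above could be loaded. The folder and file names come directly from the tree; the annotation key "frame" and the assumption that each split file is a list of per-frame records are illustrative guesses, not the documented schema, so adjust them to the actual JSON contents.

import json
from pathlib import Path

from PIL import Image

# Assumed layout, following the directory tree above.
DATA_ROOT = Path("data")
IMAGE_NAMES = [
    "photo_forward.jpg",
    "photo_lef_back.jpg",
    "photo_rig_back.jpg",
    "point_cloud_bev.jpg",
]


def load_frame_images(frame_id: str) -> dict[str, Image.Image]:
    """Load the three camera views and the BEV projection for one frame folder (e.g. FR1)."""
    frame_dir = DATA_ROOT / "images" / frame_id
    return {name: Image.open(frame_dir / name) for name in IMAGE_NAMES}


def load_annotations(split: str = "train") -> list:
    """Read one of the split files (train_v2.json / val_v2.json / test_v2.json).

    The file is assumed here to contain a list of per-frame records; the real
    schema may differ.
    """
    with open(DATA_ROOT / f"{split}_v2.json", "r", encoding="utf-8") as f:
        return json.load(f)


if __name__ == "__main__":
    annotations = load_annotations("train")
    first = annotations[0]
    # "frame" is a hypothetical key naming the frame folder (FR1, FR2, ...).
    images = load_frame_images(first.get("frame", "FR1"))
    print(f"Loaded {len(images)} images for the first training frame.")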
Reference
When using this resource, please cite:
@inproceedings{cao2024maplm,
title={MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding},
author={Cao, Xu and Zhou, Tong and Ma, Yunsheng and Ye, Wenqian and Cui, Can and Tang, Kun and Cao, Zhipeng and Liang, Kaizhao and Wang, Ziran and Rehg, James M and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21819--21830},
year={2024}
}
@inproceedings{tang2023thma,
title={THMA: Tencent HD Map AI System for Creating HD Map Annotations},
author={Tang, Kun and Cao, Xu and Cao, Zhipeng and Zhou, Tong and Li, Erlong and Liu, Ao and Zou, Shengtao and Liu, Chang and Mei, Shuqi and Sizikova, Elena and others},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={37},
number={13},
pages={15585--15593},
year={2023}
}