---
dataset_info:
  config_name: v1.5
  features:
  - name: frame_id
    dtype: string
  - name: images
    sequence: string
  - name: question
    sequence: string
  - name: options
    sequence:
      sequence: string
  - name: answer
    sequence: string
  - name: question_type
    sequence: string
---

# Official MapLM-v1.5 Dataset Release for "MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding"

## Dataset Access

Due to the large size of the dataset and limitations of the Hugging Face Datasets library, the training set is not hosted directly in this repository. It can instead be downloaded from [this link](https://purdue0-my.sharepoint.com/:u:/g/personal/yunsheng_purdue_edu/Ee4a-FKaqh1Cq-bNW49zKq0BM8XOquOAkPFvxYiis89OTg?e=28gDyC).

Additionally, we provide a custom data loader based on the Hugging Face Datasets library, available in the `maplm_v1_5.py` file.
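
As a minimal sketch, the loader script can be used through the standard `datasets.load_dataset` API. The config name follows the card metadata above, but the `data_dir` path is a placeholder and whether `trust_remote_code` is needed depends on your `datasets` version; adjust both to your local setup:

```python
from datasets import load_dataset

# Minimal sketch: load MAPLM-v1.5 through the provided loader script.
# "path/to/extracted/data" is a placeholder for wherever the downloaded
# training archive was extracted; adjust it to your local layout.
dataset = load_dataset(
    "maplm_v1_5.py",
    name="v1.5",
    data_dir="path/to/extracted/data",
    trust_remote_code=True,  # may be required on recent datasets versions
)

print(dataset)  # inspect the available splits and features
```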

## Challenge Overview

The MAPLM-QA Challenge Track is based on a subset of the MAPLM dataset, specifically designed for Visual Question Answering (VQA) in the context of traffic scene understanding. Participants are invited to develop innovative methods to accurately answer multiple-choice questions about complex traffic scenes, using high-resolution panoramic images and 2.5D bird’s-eye view (BEV) representations. Top-performing teams will be recognized with certificates and honorariums.

## Evaluation

To evaluate different VQA baselines for the MAPLM-QA task, we have categorized the question-answer pairs into two types: Open QA and Fine-grained QA. The challenge will focus on Fine-grained QA questions, which are treated as a multi-class classification problem over a fixed set of options. These will be evaluated using accuracy (the ratio of correctly answered questions), reported for four categories: LAN, INT, QLT, and SCN.
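
As an illustration, per-category accuracy can be computed by grouping each graded question under its category label. The record format below (one dict per question with hypothetical `question_type`, `prediction`, and `answer` fields) is an assumption for illustration, not the official evaluation script:

```python
from collections import defaultdict

def per_category_accuracy(records):
    """Accuracy per question category (e.g. LAN, INT, QLT, SCN).

    `records` is assumed to be a list of dicts such as
    {"question_type": "LAN", "prediction": 2, "answer": 2}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["question_type"]] += 1
        correct[r["question_type"]] += int(r["prediction"] == r["answer"])
    return {cat: correct[cat] / total[cat] for cat in total}
```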

In addition to evaluating individual items, we employ two overall metrics (a computation sketch follows the list):

- **Frame-Overall Accuracy (FRM):** This metric is set to 1 if all Fine-grained QA questions are answered correctly for a given frame; otherwise, it is 0.
- **Question-Overall Accuracy (QNS):** This metric is the average correct ratio across all questions.
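
Under the same assumed per-question record format as above (with a hypothetical `frame_id` field added), FRM and QNS can be sketched as follows:

```python
from collections import defaultdict

def overall_metrics(records):
    """Compute FRM (frame-overall) and QNS (question-overall) accuracy.

    Assumes each record is a dict with hypothetical `frame_id`,
    `prediction`, and `answer` fields.
    """
    per_frame = defaultdict(list)
    for r in records:
        per_frame[r["frame_id"]].append(r["prediction"] == r["answer"])

    # FRM: a frame counts as correct only if every one of its questions is correct.
    frm = sum(all(flags) for flags in per_frame.values()) / len(per_frame)
    # QNS: plain accuracy over all graded questions.
    qns = sum(r["prediction"] == r["answer"] for r in records) / len(records)
    return {"FRM": frm, "QNS": qns}
```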

For more details, please refer to the [MAPLM paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_MAPLM_A_Real-World_Large-Scale_Vision-Language_Benchmark_for_Map_and_Traffic_CVPR_2024_paper.pdf).