Commit 339f0ba committed by maysonma
Parent(s): f7b639a

Upload folder using huggingface_hub
Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +23 -12
  3. data/train/images/train.json +3 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
 data/train/train.json filter=lfs diff=lfs merge=lfs -text
+data/train/images/train.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -15,16 +15,27 @@ dataset_info:
     sequence: string
   - name: question_type
     sequence: string
-  splits:
-  - name: train
-    num_bytes: 12919306
-    num_examples: 7536
-  - name: validation
-    num_bytes: 2557027
-    num_examples: 1500
-  - name: test
-    num_bytes: 2568380
-    num_examples: 1500
-  download_size: 4096
-  dataset_size: 18044713
 ---
+
+# Official MapLM-v1.5 Dataset Release for "MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding"
+
+## Dataset Access
+
+Due to the large size of the dataset and limitations of the Hugging Face Datasets library, the training set is not uploaded here directly. It can instead be downloaded from [this link](https://purdue0-my.sharepoint.com/:u:/g/personal/yunsheng_purdue_edu/Ee4a-FKaqh1Cq-bNW49zKq0BM8XOquOAkPFvxYiis89OTg?e=28gDyC).
+
+Additionally, we provide a custom data loader built on the Hugging Face Datasets library, available in the `maplm_v1_5.py` file.
+
+## Challenge Overview
+
+The MAPLM-QA Challenge Track is based on a subset of the MAPLM dataset, specifically designed for Visual Question Answering (VQA) in the context of traffic scene understanding. Participants are invited to develop innovative methods that accurately answer multiple-choice questions about complex traffic scenes, using high-resolution panoramic images and 2.5D bird's-eye-view (BEV) representations. Top-performing teams will be recognized with certificates and honorariums.
+
+## Evaluation
+
+To evaluate different VQA baselines on the MAPLM-QA task, we categorize the question-answer pairs into two types: Open QA and Fine-grained QA. The challenge focuses on Fine-grained QA questions, which are treated as a multi-class classification problem with multiple options and evaluated using the correct ratio as the accuracy metric, covering four categories: LAN, INT, QLT, and SCN.
+
+In addition to evaluating individual items, we employ two overall metrics:
+
+- **Frame-Overall Accuracy (FRM):** 1 if all Fine-grained QA questions for a given frame are answered correctly; 0 otherwise.
+- **Question-Overall Accuracy (QNS):** the average correct ratio across all questions.
+
+For more details, please refer to the [MAPLM paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Cao_MAPLM_A_Real-World_Large-Scale_Vision-Language_Benchmark_for_Map_and_Traffic_CVPR_2024_paper.pdf).
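The custom loader mentioned in the README above, `maplm_v1_5.py`, follows the standard Hugging Face Datasets loading-script pattern, so it would typically be consumed as below. This is a minimal sketch: the split name and record fields accessed here are illustrative assumptions, not part of this commit; consult the script itself for the actual interface.

```python
# Hypothetical usage sketch of the maplm_v1_5.py loading script.
# Split names and record fields are assumptions for illustration;
# see maplm_v1_5.py in this repo for the actual interface.
from datasets import load_dataset

# load_dataset accepts a path to a local loading script, which defines
# how the raw files (including the externally hosted training set) are parsed.
dataset = load_dataset("maplm_v1_5.py")

print(dataset)                      # shows the splits the script defines
example = dataset["validation"][0]  # "validation" is an assumed split name
print(example.keys())               # e.g. the question / question_type fields
```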
data/train/images/train.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96197371fe175acae295a323543ee83bcc365055f6fed7536ca3c6e649024c4e
+size 28603760
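To make the FRM and QNS definitions from the README's Evaluation section concrete, here is a minimal sketch of how they could be computed. The per-frame list of correctness flags is an assumed input format for illustration, not the official challenge evaluation code.

```python
# Sketch of the two overall metrics from the README's Evaluation section.
# The input format (one list of per-question correctness flags per frame)
# is an assumption, not the official format.
from typing import List

def frame_overall_accuracy(frames: List[List[bool]]) -> float:
    """FRM: a frame scores 1 only if ALL of its Fine-grained QA answers are correct."""
    return sum(all(frame) for frame in frames) / len(frames)

def question_overall_accuracy(frames: List[List[bool]]) -> float:
    """QNS: average correct ratio over every individual question."""
    total_questions = sum(len(frame) for frame in frames)
    return sum(sum(frame) for frame in frames) / total_questions

# Two frames, four Fine-grained questions each (LAN, INT, QLT, SCN):
frames = [[True, True, True, True], [True, False, True, True]]
print(frame_overall_accuracy(frames))     # 0.5   (only the first frame is fully correct)
print(question_overall_accuracy(frames))  # 0.875 (7 of 8 questions correct)
```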