thomasrantian committed
Commit f87c23a · 1 Parent(s): b697719

update doc

Files changed (2)
  1. COMPETITION_DESC.md +2 -2
  2. SUBMISSION_DESC.md +10 -2
COMPETITION_DESC.md CHANGED
@@ -9,7 +9,7 @@ The autonomous driving industry is increasingly adopting end-to-end learning fro
 This challenge is a sub-track of `driving with language`, and aims to provide a benchmark for leveraging the common-sense reasoning capabilities of MM-LLMs for effective planning in long-tail driving scenarios (e.g., navigating around construction sites, overtaking parked cars through the oncoming lane, etc.).

 ## Evaluation Dataset
- Our evaluation dataset shares the same format as the DriveLM dataset and can be downloaded here: xxxx.
+ Our evaluation dataset shares the same format as the DriveLM dataset and can be downloaded here: [v1_1_val_nus_q_only_with_long_tail](https://mega.nz/file/AS8USBCY#6Oqnnwz7E1z1pRuqWlqD4YGC1uM6yCaKuRwUO8dTG8I).
 Note that the evaluation data file is the same as the `DriveLM-nuScenes version-1.1 val` (the data file used in the `driving with language` track). We directly added our long-tail planning QAs to each evaluation keyframe (if the scenario is a long-tail event).

 **Long-tail events construction**: We manually inspected the nuScenes dataset and identified the following long-tail scenarios for evaluation, each representing less than 1% of the training data: 1) executing 3-point turns; 2) resuming motion after a full stop; 3) overtaking parked cars through the oncoming lane; and 4) navigating around construction sites.
@@ -20,7 +20,7 @@ The model consumes visual observations (we do not restrict history frames for in

 <u>**Note that the coordinate system used in this track is the ego vehicle frame, i.e., the front direction is the y-axis, and the right direction is the x-axis.**</u>

- Example question: `You are the brain of an autonomous vehicle and are trying to plan a safe and efficient motion. The autonomous vehicle needs to turn right at the next intersection. What objects are important for the autonomous vehicle's planning? What are these objects' (x, y) locations, and how should the vehicle interact with them? Please plan the autonomous vehicle's 3-second future trajectory using 6 waypoints, one every 0.5 seconds.`
+ Example question: `You are the brain of an autonomous vehicle and are trying to plan a safe and efficient motion. The autonomous vehicle needs to keep forward along the road. What objects are important for the autonomous vehicle's planning? How should the vehicle interact with them? Please plan the autonomous vehicle's 3-second future trajectory using 6 waypoints, one every 0.5 seconds.`

 **Routing command**:
 Different from previous works that use the relative position of the ground-truth ego trajectory to define high-level commands ("keep forward" and "turn left/right"), we re-labeled the nuScenes dataset to use road-level navigation signals as high-level commands, including: "keep forward along the current road," "prepare to turn right/left at the next intersection," "turn right/left at the intersection," "left/right U-turn," and "left/right 3-point turn."
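The ego-frame convention above (front = +y, right = +x) can be sketched with a small coordinate transform. This is an illustrative helper, not part of the challenge toolkit; the pose variable names are assumptions.

```python
import math

def global_to_ego(px, py, ego_x, ego_y, ego_yaw):
    """Convert a global (x, y) point into this track's ego-vehicle frame,
    where +y points forward and +x points to the ego's right.
    ego_yaw is the ego heading in radians, measured from the global x-axis.
    (Illustrative helper; not part of the challenge code.)"""
    dx, dy = px - ego_x, py - ego_y
    fwd_x, fwd_y = math.cos(ego_yaw), math.sin(ego_yaw)       # unit vector straight ahead
    right_x, right_y = math.sin(ego_yaw), -math.cos(ego_yaw)  # unit vector to the ego's right
    return (dx * right_x + dy * right_y,   # x: lateral offset (positive = right)
            dx * fwd_x + dy * fwd_y)       # y: longitudinal offset (positive = ahead)
```

For example, with the ego at the origin facing the global +y direction (yaw = π/2), a point 1 m straight ahead maps to (0.0, 1.0) in the ego frame, and a point 1 m to the ego's right maps to (1.0, 0.0).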
SUBMISSION_DESC.md CHANGED
@@ -2,7 +2,15 @@

 ## Submission Instruction

- We use the same submission pipeline as the `driving with language` track.
+ We use the same submission pipeline as the `driving with language` track. If your model can run inference using the evaluation data in the `driving with language` track, it should work in this track as well.
+
+ Steps for Submission:
+ 1. Download the evaluation dataset.
+ 2. Run inference using the dataset.
+ 3. Format the inference results following the requirements outlined below.
+ 4. Run `prepare_submission.py` and submit the final output file to the server.
+
+ ## Detailed instructions

 Please refer to the [challenge README](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/README.md) on GitHub to prepare data and train your model.
 <!-- Please evaluate your [output.json](https://github.com/OpenDriveLab/DriveLM/blob/main/challenge/output.json) locally first before submitting to the test server. -->
@@ -84,7 +92,7 @@ You should first refer to this [location](https://github.com/OpenDriveLab/DriveL

 ### Finally, which dataset do we submit to the competition?

- Please refrain from using demo data. Instead, use the [validation data](https://drive.google.com/file/d/1fsVP7jOpvChcpoXVdypaZ4HREX1gA7As/view?usp=sharing) for inference and submission to the evaluation server.
+ Please refrain from using demo data. Instead, use the [v1_1_val_nus_q_only_with_long_tail](https://mega.nz/file/AS8USBCY#6Oqnnwz7E1z1pRuqWlqD4YGC1uM6yCaKuRwUO8dTG8I) data for inference and submission to the evaluation server.

 ### I encountered KeyError: 'b789de07180846cc972118ee6d1fb027_b0e6fd5561454b2789c853e5350557a8_0' in my Submission Comment, what should I do?
 If you see a random UUID in your Submission Comment, the error happened on [this line](https://github.com/OpenDriveLab/DriveLM/blob/030265cb243dd5b88bd0e20130c1a72e68bcf14e/challenge/evaluation.py#L178); you can try to reproduce it locally. Most likely, this is due to not using the validation data mentioned above.
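The submission steps above can be sketched as a minimal script. The field names `id`, `question`, and `answer` are assumptions based on the DriveLM-style format, and the lambda stands in for your model's inference function; check the challenge README for the exact schema.

```python
import json

def build_submission(eval_questions, answer_fn):
    """Attach a model answer to every QA entry in a DriveLM-style question file.

    NOTE: the field names "id", "question", and "answer" are assumptions based
    on the DriveLM-style format; verify them against the challenge README.
    """
    submission = []
    for qa in eval_questions:
        submission.append({
            "id": qa["id"],
            "question": qa["question"],
            "answer": answer_fn(qa["question"]),
        })
    return submission

# Stub usage: a real run would load the downloaded evaluation JSON, call the
# model instead of the lambda, and write the result out for prepare_submission.py.
demo = build_submission(
    [{"id": "scene_frame_0", "question": "Plan the 3-second trajectory."}],
    answer_fn=lambda q: "[(0.0, 1.2), (0.0, 2.4), (0.0, 3.6), (0.0, 4.8), (0.0, 6.0), (0.0, 7.2)]",
)
print(json.dumps(demo, indent=2))
```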
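One way to reproduce the KeyError above locally is to check, before submitting, that your output covers every id in the validation file. This is a hypothetical helper, not part of the official evaluation code, and the `id` field name is an assumption based on the DriveLM-style format.

```python
def find_missing_ids(expected_ids, submission):
    """Return the evaluation ids absent from a submission's entries.

    A non-empty result points at the likely cause of the KeyError: the server
    looks up every expected id in your output. (Hypothetical helper; the "id"
    field name is an assumption based on the DriveLM-style format.)
    """
    present = {entry["id"] for entry in submission}
    return [eid for eid in expected_ids if eid not in present]
```

Run it with the ids collected from the validation data against your output.json; any id it reports would trigger the server-side KeyError.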