thomasrantian committed
Commit 61b76b5 · 1 Parent(s): 45707a3

update doc

Files changed (3):
  1. COMPETITION_DESC.md +1 -1
  2. DATASET_DESC.md +11 -7
  3. RULES.md +4 -2
COMPETITION_DESC.md CHANGED
@@ -42,7 +42,7 @@ To correctly parse the driving behavior plan and the motion plan from the model'
 
 | Important Event | Date |
 | --- | --- |
-| **Test Server Open for Initial Test** | Jan 15, 2025 |
+| **Test Server Open for Initial Test** | Jan 20, 2025 |
 <!-- | **Leaderboard Public** | July 12, 2024 |
 | **Test Server Close** | TBA | -->
 
DATASET_DESC.md CHANGED
@@ -1,16 +1,20 @@
-# DriveLM for Driving with Language
-- <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">Github</a> | <a href="https://arxiv.org/abs/2312.14150" target="_blank">Paper</a>
+# Long-tail Planning with Language
 
-- Point of contact: [Chonghao (司马崇昊)](mailto:chonghaosima@gmail.com)
+Our evaluation dataset shares the same format as the DriveLM dataset and can be downloaded here: xxxx.
+Note that the evaluation data file is the same as `DriveLM-nuScenes version-1.1 val` (the data file used in the `driving with language` track). We directly added our long-tail planning QAs to each evaluation keyframe whose scenario is a long-tail event.
 
-## Dataset Description
+<!-- - <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">Github</a> | <a href="https://arxiv.org/abs/2312.14150" target="_blank">Paper</a>
 
-Please visit <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">DriveLM: Driving with Graph Visual Question Answering</a> for details.
+- Point of contact: [Chonghao (司马崇昊)](mailto:chonghaosima@gmail.com) -->
 
-## Dataset Download
+<!-- ## Dataset Description
+
+Please visit <a href="https://github.com/OpenDriveLab/DriveLM" target="_blank">DriveLM: Driving with Graph Visual Question Answering</a> for details. -->
+
+<!-- ## Dataset Download
 
 <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">The baseline code</a> can run on both the full-scale and the demo train data. All the code in the challenge repo (including training / inference of the baseline, and evaluation) supports the demo train data (which is in the same format as the full-scale train data).
 
 For dataset download, you can visit the following pages.
 
-- <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge#drivelm" target="_blank">DriveLM-nuScenes version-1.1 dataset download</a>.
+- <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge#drivelm" target="_blank">DriveLM-nuScenes version-1.1 dataset download</a>. -->
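
Since the evaluation file reuses the `DriveLM-nuScenes version-1.1 val` format, here is a minimal loading sketch. It assumes the usual DriveLM layout (a JSON dict keyed by scene token, each scene holding `key_frames`, and per-frame `QA` categories such as `planning`); the file name `eval_data.json` and the exact key under which the added long-tail planning QAs appear are illustrative assumptions, not confirmed by this commit.

```python
import json

# Minimal sketch: load the evaluation file and collect planning QAs per keyframe.
# Assumptions (not confirmed by this commit): the file follows the DriveLM-nuScenes
# v1.1 layout, i.e. {scene_token: {"key_frames": {frame_token: {"QA": {...}}}}},
# and the added long-tail planning QAs live under the "planning" QA category.
with open("eval_data.json") as f:  # hypothetical file name
    data = json.load(f)

for scene_token, scene in data.items():
    for frame_token, frame in scene.get("key_frames", {}).items():
        qas = frame.get("QA", {})
        # Long-tail keyframes are assumed to carry extra planning QAs.
        for qa in qas.get("planning", []):
            print(scene_token, frame_token, qa.get("Q"), "->", qa.get("A"))
```

Because the schema matches the `driving with language` track file, a loader already written for that track should work unchanged; long-tail keyframes simply carry additional planning QAs.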
RULES.md CHANGED
@@ -1,10 +1,12 @@
 # Rules
 
-## General Rules
+We will update the rules soon.
+
+<!-- ## General Rules
 
 If you just want to submit your results to the official leaderboard, please check the <a href="https://opendrivelab.com/challenge2024/#general_rules" target="_blank">general rules</a> and <a href="https://github.com/OpenDriveLab/DriveLM/tree/main/challenge" target="_blank">track details</a>. For now we inherit the general rules from the CVPR AGC 2024.
 
 ## Specific Rules
 
 - We do not restrict the input modalities or history frames used for model inference, but we do not allow the use of any human-labelled annotations or nuScenes-provided ground-truth annotations (including but not limited to bounding boxes, maps, and lidar segmentation). Also please note that our baseline model only uses camera input.
-- Using offline labels from the question text is prohibited. Please see the <a href="https://docs.google.com/document/d/1QguVBhv03lIEsbrNOKqx5MyDpaS-fPxhjEWR39pBMTw/edit#heading=h.42gawos02r5l" target="_blank">statement</a>.
+- Using offline labels from the question text is prohibited. Please see the <a href="https://docs.google.com/document/d/1QguVBhv03lIEsbrNOKqx5MyDpaS-fPxhjEWR39pBMTw/edit#heading=h.42gawos02r5l" target="_blank">statement</a>. -->