Long-tail Planning with Language Official Leaderboard
Overview
Welcome to the official leaderboard for Long-tail Planning with Language.
The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. Multi-Modal Large Language Models (MM-LLMs), which naturally integrate various data modalities and are trained with world knowledge, are emerging as promising foundations for developing autonomy stacks in autonomous vehicles.
This challenge is a sub-track of driving with language and aims to provide a benchmark for leveraging the common-sense reasoning capabilities of MM-LLMs for effective planning in long-tail driving scenarios (e.g., navigating around construction sites, overtaking parked cars through the oncoming lane, etc.).
Evaluation Dataset
Our evaluation dataset shares the same format as the DriveLM dataset and can be downloaded here: v1_1_val_nus_q_only_with_long_tail.
Note that the evaluation data file is the same as the DriveLM-nuScenes version-1.1 val (the data file used in the driving with language track). We directly added our long-tail planning QAs into each evaluation keyframe (if the scenario is a long-tail event).
Long-tail events construction: We manually inspected the NuScenes dataset and identified the following long-tail scenarios for evaluation, each representing less than 1% of the training data: 1) executing 3-point turns; 2) resuming motion after a full stop; 3) overtaking parked cars through the oncoming lane; and 4) navigating around construction sites.
Evaluation question:
Our evaluation specifically assesses the model's ability to conduct Route-conditioned Hierarchical Planning:
The model consumes visual observations (we do not restrict history frames for inference) and routing commands to reason about driving behavior plans and motion plans. Specifically, the model is instructed to progressively generate the driving plans (as text) in three steps. First, the model identifies the critical objects in the current driving scene, including their categories and 2D locations in the ego frame. Next, it proposes the desired behavior mode, detailing interaction plans with the critical objects (e.g., overtake) and lane-level decisions (e.g., left lane change). Finally, it generates a 3-second motion plan (6 waypoints).
Note that the coordinate system used in this track is the ego vehicle frame, i.e., the front direction is the y-axis, and the right direction is the x-axis.
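To make the coordinate convention concrete, here is a tiny illustrative sketch (our own example, not provided code) of what trajectories look like in this ego frame:

```python
# Ego-vehicle frame used in this track: +y points forward, +x points right.
# A vehicle driving straight ahead therefore has waypoints with x close to 0
# and y increasing over the 3-second horizon (one waypoint every 0.5 s).
straight_ahead = [(0.0, 1.0), (0.0, 2.0), (0.0, 3.0),
                  (0.0, 4.0), (0.0, 5.0), (0.0, 6.0)]

# A gentle right turn drifts toward positive x while still moving forward.
gentle_right = [(0.1, 1.0), (0.3, 2.0), (0.7, 3.0),
                (1.2, 3.9), (1.9, 4.8), (2.7, 5.6)]
```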
Example question: You are the brain of an autonomous vehicle and try to plan a safe and efficient motion. The autonomous vehicle needs to keep forward along the road. What objects are important for the autonomous vehicle's planning? How to interact with them? Please plan the autonomous vehicle's 3-second future trajectory using 6 waypoints, one every 0.5 second.
Routing command:
Different from previous works that use the relative position of the ground-truth ego trajectory to define high-level commands ("keep forward" and "turn left/right"), we re-labeled the NuScenes dataset to use road-level navigation signals as high-level commands, including: "keep forward along the current road," "prepare to turn right/left at the next intersection," "turn right/left at the intersection," "left/right U-turn," and "left/right 3-point turn."
Evaluation metric:
We only evaluate the quality of the driving behavior plan and the accuracy of the final motion plan.
We will parse the driving behavior plan and the motion plan from the text output for evaluation. The driving behavior plan is evaluated using the GPT score (same as in the driving with language track), i.e., GPT rates how well the predicted plan matches the ground-truth (GT) plan. The accuracy of the final motion plan is evaluated using the standard L2 difference between the predicted and the GT motion plans.
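For reference, a minimal sketch of how the L2 motion-plan error could be computed; the function name and the simple per-waypoint averaging below are our assumptions, not the official evaluation script:

```python
import numpy as np

def l2_motion_error(pred_waypoints, gt_waypoints):
    """Average L2 distance (meters) between predicted and GT waypoints.

    Both inputs hold six (x, y) waypoints in the ego frame, one every 0.5 s
    over the 3-second horizon. Illustrative sketch only.
    """
    pred = np.asarray(pred_waypoints, dtype=float)
    gt = np.asarray(gt_waypoints, dtype=float)
    assert pred.shape == gt.shape == (6, 2)
    # Per-waypoint Euclidean distance, averaged over the 6 waypoints.
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Example usage with made-up numbers.
pred = [(0.0, 0.5), (0.0, 1.2), (0.1, 2.0), (0.1, 3.0), (0.2, 4.1), (0.2, 5.3)]
gt   = [(0.0, 0.4), (0.0, 1.1), (0.0, 2.1), (0.0, 3.2), (0.0, 4.3), (0.0, 5.5)]
print(l2_motion_error(pred, gt))
```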
Specific Requirements
To correctly parse the driving behavior plan and the motion plan from the model's text output for evaluation, we kindly ask that the output follows the format below:
- The behavior plan should be formatted as: The autonomous vehicle's 3-second future behavior plan is: "DRIVING BEHAVIOR PLAN". We will extract DRIVING BEHAVIOR PLAN for evaluation.
- The motion plan should be formatted as: The autonomous vehicle's 3-second future trajectory is: [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6)]. We will extract [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6)] for evaluation.
Sample output: There are 3 important objects: car at (3.2, 15.1), pedestrian at (-0.1, 7.0), pedestrian at (-0.7, 6.3). The autonomous vehicle's 3-second future behavior plan is: "The crossing pedestrians are nearly finished crossing the street and the road is clear to proceed. The autonomous vehicle should continue to drive, accelerate slightly and go straight". The autonomous vehicle's 3-second future trajectory is: [(0.0,0.0), (0.0,0.0), (0.0,-0.0), (0.0,0.0), (0.0,0.2), (0.0,0.8)].
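As a rough illustration of how output in this format can be parsed, here is a minimal sketch; the regular expressions and function name are our own assumptions, not the official evaluation code:

```python
import re

def parse_prediction(text):
    """Extract the behavior plan and 6-waypoint trajectory from model text.

    Illustrative sketch only; the official parser may differ.
    """
    behavior = None
    m = re.search(r'3-second future behavior plan is:\s*"(.*?)"', text, re.DOTALL)
    if m:
        behavior = m.group(1).strip()

    trajectory = None
    m = re.search(r'3-second future trajectory is:\s*\[(.*?)\]', text, re.DOTALL)
    if m:
        # Pull out all "(x, y)" pairs and convert them to floats.
        pairs = re.findall(r'\(\s*(-?\d+\.?\d*)\s*,\s*(-?\d+\.?\d*)\s*\)', m.group(1))
        trajectory = [(float(x), float(y)) for x, y in pairs]

    return behavior, trajectory

sample = ("The autonomous vehicle's 3-second future behavior plan is: \"Go straight\". "
          "The autonomous vehicle's 3-second future trajectory is: "
          "[(0.0,0.0), (0.0,0.5), (0.0,1.1), (0.0,1.8), (0.0,2.6), (0.1,3.5)].")
print(parse_prediction(sample))
```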
Important Dates
| Important Event | Date |
|---|---|
| Test Server Open for Initial Test | Jan 20, 2025 |
Baseline
Everything you need is in our DriveLM Challenge repo.
Submission Instructions
Participants are expected to submit their predictions as a Hugging Face model hub repo. Please refer to Submission Information for detailed steps.
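As a rough illustration only (the repository name and file name below are placeholders; follow the Submission Information page for the authoritative naming and layout), uploading a prediction file to a Hugging Face model repo can look like this:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you have already authenticated, e.g. via `huggingface-cli login`

# Placeholder repo and file names -- see the Submission Information page
# for the required naming and file format.
repo_id = "your-username/longtail-planning-submission"
api.create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj="predictions.json",
    path_in_repo="predictions.json",
    repo_id=repo_id,
    repo_type="model",
)
```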