Driving with Language Official Leaderboard

Overview

Welcome to the official leaderboard of Driving with Language.

Incorporating the language modality, this task connects Vision Language Models (VLMs) with autonomous driving systems. Models introduce the reasoning ability of LLMs into the decision-making process, pursuing generalizable and explainable driving behavior. Given multi-view images as input, models are required to answer questions covering various aspects of driving.

Besides the official leaderboard, if you want to participate in the PRCV driving-with-language challenge, you are strictly required to register your team by filling in this Google Form. The registration information can be edited until TBA. If you only want to submit your results to the official leaderboard, you can ignore this Google Form.

If you want to participate in the PRCV driving-with-language challenge, please follow the PRCV challenge general rules. If you only want to submit your results to the official leaderboard, please check the general rules and track details. For now, we inherit the general rules from the CVPR AGC 2024.

Specific Rules

  • We do not restrict the input modality or the number of history frames used for model inference, but we do not allow the use of any human-labelled annotations or nuScenes-provided ground-truth annotations (including but not limited to bounding boxes, maps, and LiDAR segmentation). Also, please note that our baseline model only uses camera input.
  • Using offline labels from the question text is prohibited. Please see the statement.

Important Dates

| Important Event    | Date          |
|--------------------|---------------|
| Test Server Open   | July 12, 2024 |
| Leaderboard Public | July 12, 2024 |
| Test Server Close  | TBA           |

Baseline

Everything you need is in our DriveLM Challenge repo.

Dataset

This track is based on the DriveLM dataset we proposed. Please refer to the Dataset tab of the competition space.

Primary Metrics

  • Language Evaluation

    • Sub-metrics (BLEU, ROUGE_L, CIDEr): standard unsupervised automated metrics for Natural Language Generation (NLG), computed between the predicted and ground-truth answers (see the sketch after this list).
  • Accuracy

    • Ratio of correctly predicted samples to the total number of samples.
  • ChatGPT Score

    • ChatGPT is prompted to score the consistency between the ground-truth and predicted answers.
  • Match Score

    • Ratio of correctly predicted important objects to the total number of objects.
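
For reference, below is a minimal sketch of how the language sub-metrics could be computed with the widely used pycocoevalcap package. This is an illustrative assumption, not the official evaluation script; the script in the DriveLM Challenge repo is authoritative.

```python
# Illustrative sketch: BLEU, ROUGE_L, and CIDEr between ground-truth and
# predicted answers, using pycocoevalcap (pip install pycocoevalcap).
# NOTE: this is NOT the official eval script; the data format and helper
# names here are assumptions.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

def language_scores(gt_answers: dict, pred_answers: dict) -> dict:
    """gt_answers / pred_answers: {question_id: answer string}."""
    # pycocoevalcap expects {id: [candidate strings]}; strings are
    # whitespace-tokenized internally (the official pipeline may apply
    # a proper tokenizer first).
    gts = {qid: [ans] for qid, ans in gt_answers.items()}
    res = {qid: [pred_answers[qid]] for qid in gt_answers}

    bleu, _ = Bleu(4).compute_score(gts, res)    # [BLEU-1, ..., BLEU-4]
    rouge_l, _ = Rouge().compute_score(gts, res)
    cider, _ = Cider().compute_score(gts, res)
    return {"BLEU-4": bleu[3], "ROUGE_L": rouge_l, "CIDEr": cider}
```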

The final score is a weighted average of the scores above, with ChatGPT Score, Language Score, Match Score, and Accuracy weighted 0.4, 0.2, 0.2, and 0.2 respectively.
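
Concretely, the weighting works as in the short sketch below (variable names are illustrative; each component score is assumed to be normalized to [0, 1] before averaging):

```python
# Final leaderboard score as a weighted average of the four metrics.
# Assumes each component score is already normalized to [0, 1].
WEIGHTS = {"chatgpt": 0.4, "language": 0.2, "match": 0.2, "accuracy": 0.2}

def final_score(chatgpt: float, language: float,
                match: float, accuracy: float) -> float:
    scores = {"chatgpt": chatgpt, "language": language,
              "match": match, "accuracy": accuracy}
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: final_score(0.75, 0.60, 0.50, 0.80) -> 0.68
```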

Submission Instructions

Participants are expected to submit their predictions as a Hugging Face model hub repo. Please refer to the Submission Information for detailed steps.
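
As an illustration, a prediction file can be pushed to a model repo with the huggingface_hub client. The repo id and file name below are placeholders; the required file format and naming are specified in the Submission Information.

```python
# Sketch: upload a predictions file to a Hugging Face model hub repo.
# "my-team/drivelm-submission" and "output.json" are placeholders.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("my-team/drivelm-submission", repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj="output.json",          # local predictions file
    path_in_repo="output.json",             # destination path in the repo
    repo_id="my-team/drivelm-submission",
    repo_type="model",
)
```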