# RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
Jianhao Yuan1, Shuyang Sun1, Daniel Omeiza1, Bo Zhao2, Paul Newman1, Lars Kunze1, Matthew Gadd1
1 University of Oxford 2 Beijing Academy of Artificial Intelligence
{jianhaoyuan,kevinsun,daniel,pnewman,lars,mattgadd}@robots.ox.ac.uk
Abstract—Robots powered by 'black-box' models need to provide
human-understandable explanations which we can trust. Hence,
explainability plays a critical role in trustworthy autonomous
decision-making to foster transparency and acceptance among
end users, especially in complex autonomous driving. Recent
advancements in Multi-Modal Large Language models (MLLMs)
have shown promising potential in enhancing the explainability
as a driving agent by producing control predictions along with
natural language explanations. However, severe data scarcity
due to expensive annotation costs and significant domain gaps
between different datasets makes the development of a robust and
generalisable system an extremely challenging task. Moreover, the
prohibitively expensive training requirements of MLLM and the
unsolved problem of catastrophic forgetting further limit their
generalisability post-deployment. To address these challenges, we
present RAG-Driver, a novel retrieval-augmented multi-modal
large language model that leverages in-context learning for high-
performance, explainable, and generalisable autonomous driving.
By grounding in retrieved expert demonstration, we empirically
validate that RAG-Driver achieves state-of-the-art performance in
producing driving action explanations, justifications, and control
signal prediction. More importantly, it exhibits exceptional zero-
shot generalisation capabilities to unseen environments without
further training endeavours.
Index Terms—Autonomous driving, multi-modal language
model, end-to-end driving, domain generalisation
## I. Introduction
Driven by the emerging development of deep learning, autonomous driving has observed a paradigm shift from rules-based decision systems [66, 21] to data-driven learning-based approaches [28, 6, 36]. However, this comes at the cost of transparency in decision-making, especially for end-to-end autonomous driving systems which are considered black-box in nature [13]. Thus, in addition to precision in action control, explanation provision is key in ensuring trustworthy decision-making to reconcile the system's decisions with end-user expectations and foster confidence and acceptance [79, 8, 57] in dynamic driving environments.
Traditional approaches have mainly relied on attention visualisation [5, 7, 55] as a proxy to rationalise the decisions of black-box systems, or on auxiliary intermediate tasks such as semantic segmentation [25, 32], object detection [16, 31], and affordance prediction [68, 45], which provide meaningful intermediate representations for decision-making. However, these methods do not engage end-users in dialogue, as they are one-directional and not readily comprehensible by general users for the purpose of fostering trust and confidence. An alternative promising approach is the integration of natural language explanations [38, 33, 54], in particular through Multi-Modal Large Language Models (MLLMs) [1, 70]. These models, pre-trained on extensive web-scale datasets, demonstrate remarkable reasoning capacity, enabling the transformation of complex vehicular decision-making processes into more understandable narrative formats, thereby offering a new layer of explainability to conventional systems.
While several early attempts have demonstrated the potential of MLLMs as general explainable driving agents [78, 76, 51], these methods fall short of human-level understanding. One of the limitations is their failure to generalise to unseen environments. A primary obstacle is the lack of high-quality annotated data [56], coupled with the significant domain shift across various datasets [23], which hinders the models' generalisation capacity to novel environments outside of the training data distribution. Another critical challenge is the prohibitively expensive training requirement and the unsolved problem of catastrophic forgetting [39], which make re-training or fine-tuning impractical solutions due to the immense computational demands and severe performance degradation. Consequently, this further limits the models' generalisability after deployment, as they struggle to effectively utilise new data in constantly evolving environments and driving scenarios.
To address these challenges, we introduce *RAG-Driver*, a novel retrieval-augmented multi-modal large language model tailored for generalisable and explainable end-to-end driving.
As illustrated in Fig. 1, it outputs natural language texts corresponding to **(1)** the driving action and **(2)** a justification of that driving action, along with **(3)** numerical control signals, based on driving videos. The natural language texts are aligned with control signals during in-context learning [10] to enable faithful introspective explanation provision. The novelty of RAG-Driver is the integration of retrieval-augmented in-context learning (RA-ICL) mechanisms that significantly improve generalisation performance in unseen driving environments. It allows efficient recall of similar driving scenarios as contextual information, augmenting MLLM prediction through implicit meta-optimisation (Sec. III-C). Through extensive experiments, we show that *RAG-Driver* outperforms existing methods in both in-domain deployments and deployment in unseen environments (without any fine-tuning). By being grounded in analogical demonstrations, our framework significantly reduces the need for continuous retraining while enhancing the generalisability and quality of generated explanatory texts. Our primary contributions are as follows:
1) Proposing a novel retrieval-augmented in-context learning method for Multi-Modal Large Language Model (MLLM) based generalisable and explainable driving.
2) Achieving state-of-the-art introspective driving explanation performance on the standard benchmark BDD-X [38].
3) Demonstrating exceptional zero-shot generalisation capacity to unseen scenarios without training effort through a customised dataset *Spoken-SAX* featuring video sequences narrated by a professional driving instructor.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ddf70265-668c-4d1e-a9eb-89c2130c0ab5 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## II. Related Work

## A. Explainable End-To-End Autonomous Driving
End-to-end learned driving [13] maps directly from raw sensor input to vehicle control signals. This data-driven, joint optimisation of perception, prediction, and planning can be simple and efficient [28]. In this area, various learning-based approaches, including behaviour cloning [6, 12, 61, 83, 60], inverse optimal control [81, 65, 75], and reinforcement learning [36, 71, 11, 44], are promising. A key focus is explainability [80], which is crucial for improving transparency and building trust towards wider public acceptance of autonomous systems [54, 22, 57]. One line of work leverages attention visualisation, either to directly identify salient regions of input images that are important for driving decision-making [37, 5, 7] or to assist feature aggregation for downstream motion planning tasks [55, 15, 77, 63].
Another line of work uses intermediate auxiliary tasks such as semantic segmentation [25, 32], object detection [16, 31], and affordance prediction [68, 45], which help to decode latent representations into human-understandable representations.
While these methods provide explainable mechanisms by associating decision-making processes with semantic or visual representations, they are not readily comprehensible by general users for the purpose of fostering trust and confidence.
Alternatively, recent research shows promise in utilising natural language explanation. Several works develop specialist explainers [38, 41] that align attention between visual input and text generation for grounded driving action explanation.
ADAPT [33] uses a vision-language transformer with a separate decoder for caption generation alongside control signal prediction. More recently, several works explore the potential of Multi-modal Large Language Models (MLLMs). Works such as DriveGPT4 [78], Lingo [54], and DrivingMLM [76] have shown promising potential in general question-answering for driving and action planning. However, a common obstacle in both specialist and MLLM-based generalist models is the scarcity of data due to expensive annotation costs and significant domain gaps between different datasets, making the development of a robust and generalisable model an extremely challenging task. In our work, we aim to overcome these obstacles by employing a more robust inference paradigm of retrieval-augmented in-context learning to bridge domain gaps and circumvent the need for annotations in new domains.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b3a710c0-0739-485b-b618-5023b4a09318 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## B. Multi-Modal Large Language Model
Recent advancements in Large Language Models (LLMs) have paved the way for the emergence of Multi-modal Large Language Models (MLLMs) [1, 70]. Benefiting from scalable transformer-based architectures and web-scale training data, these models have demonstrated notable capabilities in general-purpose visual understanding tasks. One line of work focuses on modality fusion in the latent space, offering a scalable, end-to-end solution for MLLMs. For instance, Flamingo [3] and BLIP2 [43] fuse visual tokens into a frozen LLM through gated attention and query transformers, respectively. LLaVA [48] and MiniGPT4 [86] use a simple Multi-Layer Perceptron (MLP) with visual instruction tuning to align the pre-trained image encoder to the LLM. Most relevant to us is the line of work focusing on video-language models such as Video-LLaVA [46] and Video-LLaMA [82], which integrate pre-trained video encoders into LLMs using strategies similar to those in image-based models.
With remarkable perception and reasoning capacity, MLLMs reveal promising potential in various robotics tasks such as reasoning [20, 2, 30] and planning [62, 9, 47, 35]. Most similar to our approach are works that employ a generalist foundation model as an end-to-end embodied agent. PaLM-E [20] injects images, state estimates, and other sensor modalities into an LLM and autoregressively produces natural language commands. RT-2 [9] and RT-X [58] fine-tune on pairs of images and low-level robot control signals to perform end-to-end robot control. Specifically in driving, numerous approaches leverage a language-only LLM for decision-making [79] and then augment it with external perception module feedback [53, 24, 14], designed chain-of-thought reasoning templates [53, 52], or downstream planners [67, 69, 14] to form a system-level driving agent.
Another more relevant line of work is end-to-end driving agents. DriveGPT4 [78] leverages a video-language model, Valley [50], fine-tuned with driving-specific visual instruction tuning based on BDD-X [38]. Dolphins [51] further uses a designed grounded chain-of-thought to enhance reasoning capacity. DrivingMLM [76] and Reason2Drive [56] scale up driving visual instruction tuning datasets through a simulator-based data engine and annotation of existing large-scale datasets, respectively. While these approaches have demonstrated the potential of MLLMs, the prohibitively expensive training cost and the unsolved problem of catastrophic forgetting, which make re-training or fine-tuning post-deployment challenging, further limit their generalisation capacity to unseen driving environments. To solve this problem, we leverage a training-free retrieval-augmented in-context learning mechanism.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4a55f9ba-307c-41d7-9ee8-6da90b9b277f | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## C. In-Context Learning And Retrieval-Augmented Generation
While LLMs demonstrate strong generative and reasoning capacity, several issues remain with their output, such as hallucination [29] and slow knowledge updates [34]. In-context Learning (ICL) [10, 18] has emerged as a promising approach in LLM inference, potentially addressing several of these issues. This paradigm involves providing a test query alongside a few demonstration examples as contextual information. The LLM then generates an output for the test instance based on analogies drawn from context, without any updates to its parameters [64]. While ICL has been observed to enhance generalisability in various Natural Language Processing (NLP) tasks, its application in multi-modal contexts remains less explored, potentially due to the challenges associated with curating structured, high-quality multi-modal ICL datasets. Retrieval-Augmented Generation (RAG) [42] is another important inference paradigm for LLMs. It provides an external knowledge database to augment the compressed model knowledge within the LLM at inference time by dynamically retrieving relevant information pieces as contextual information. One of its promising applications is providing a systematic approach to curate In-Context Learning (ICL) examples. In this work, we build upon these inference paradigms and extend their application to Multimodal Large Language Models (MLLMs). We introduce a retrieval-augmented in-context learning mechanism through a curated multi-modal driving in-context instruction tuning dataset and a vector similarity-based retrieval engine specifically tailored for driving applications.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9a8dbd67-a0e4-409f-91d1-128f0c1222d9 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## III. Method
RAG-Driver is a retrieval-augmented, multi-modal large language model (MLLM) for generalisable, explainable end-to-end driving. Its multi-tasking capabilities encompass three key areas: (1) Action Explanation, providing a human-understandable driving action description; (2) Action Justification, elucidating the rationale behind specific driving actions; and (3) Next Control Signal Prediction, forecasting upcoming control signals in response to the driving conditions. As shown in Fig. 3, it is composed of two primary components: a unified perception and planning unit built upon an MLLM backbone, and a memory unit built upon a hybrid vector and textual database. These components interact through a retrieval engine, enabling robust multi-modal in-context learning (ICL) during decision-making.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f5206e64-9ffa-466b-aff8-8221e5051355 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## A. Multi-Modal Large Language Model Architecture
Following the successful MLLM paradigm of Video-LLaVA [46], we align visual and language embeddings through visual instruction tuning. We leverage a pre-trained video encoder and LLM, and then inject the video embedding into the LLM through an MLP projector to build a fully differentiable MLLM.
Video Encoder We adopt the pre-trained LanguageBind video encoder [85] as our frozen visual backbone $f_v$, which is based on a ViT-B/32 vision transformer [19]. As shown in Fig. 4, given an input video frame sequence $V_i = \left\{ v_i^1, v_i^2, \ldots, v_i^k \right\} \in \mathbb{R}^{3 \times k \times 224 \times 224}$, we first split the video into multiple temporal sequences, each consisting of patches that share the same spatial location across different frames. These patches are then transformed through a linear projection for a vision transformer to output video embedding $\mathbf{z}_{vo} \in \mathbb{R}^{2048 \times 1024}$. The video encoder is pre-trained with video-language contrastive learning (i.e., CLIP4clip [49]) without further fine-tuning.
Cross-Modality Projector We then leverage a two-layer MLP to project and align the encoded video embedding $\mathbf{z}_{vo}$ with language token embeddings $\mathbf{z}_v \in \mathbb{R}^{2048 \times 4096}$.
$$f_{p}(\mathbf{z}_{vo})=\text{GELU}\left(W_{2}\cdot\text{GELU}\left(W_{1}\cdot\mathbf{z}_{vo}\right)\right)\tag{1}$$
In particular, the projector $f_p$ takes the form of Eq. (1), where we use GELU [26] as the activation function. We train the projector with a two-stage training strategy as detailed in Sec. III-B.
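As a concrete illustration, here is a minimal PyTorch sketch of the projector in Eq. (1); the class and variable names, hidden width, and use of bias terms are our own assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class CrossModalityProjector(nn.Module):
    """Two-layer MLP with GELU activations, mirroring Eq. (1)."""
    def __init__(self, in_dim: int = 1024, out_dim: int = 4096):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, out_dim)   # W1
        self.fc2 = nn.Linear(out_dim, out_dim)  # W2
        self.act = nn.GELU()

    def forward(self, z_vo: torch.Tensor) -> torch.Tensor:
        # z_vo: (num_video_tokens, in_dim) embeddings from the frozen video encoder
        return self.act(self.fc2(self.act(self.fc1(z_vo))))

# Project 2048 video tokens of width 1024 into a 4096-d LLM token space.
projector = CrossModalityProjector()
z_v = projector(torch.randn(2048, 1024))  # -> (2048, 4096)
```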
Large Language Model Backbone Finally, the LLM takes in both the aligned video embedding $\mathbf{z}_v$ and the language embedding $\mathbf{z}_t$ of textual context information and task instruction to predict both a textual action explanation and a numerical control signal.
We adopt Vicuna 1.5 7B [84], which is instruction-tuned based on LLaMA 2 [72], as our LLM backbone. For a decoder-only LLM conditioned on the multi-modal contextual prefix $\mathbf{z}_{1:n} = [\mathbf{z}_v, \mathbf{z}_t]$ of length $n$, the joint probability of the output $\mathbf{x}_{n+1:L}$ is given by Eq. (2), where $P_\theta$ is the transformer-based LLM backbone parameterised by $\theta$.

$$P(\mathbf{x}_{n+1:L}\mid\mathbf{z}_{1:n})=\prod_{l=n+1}^{L}P_{\theta}(\mathbf{x}_{l}\mid\mathbf{x}_{1:l-1},\mathbf{z}_{1:n})\tag{2}$$
Each output token $\mathbf{x}_l$ is then sampled auto-regressively based on the previous output and context, and finally decoded to language space through a text de-tokenizer.
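To make the sampling loop of Eq. (2) concrete, a minimal greedy-decoding sketch follows; the `model` and `embed` interfaces are hypothetical stand-ins for the Vicuna backbone and its token embedding table, not an actual API.

```python
import torch

@torch.no_grad()
def generate(model, z_prefix: torch.Tensor, max_new_tokens: int, eos_id: int):
    """Greedy autoregressive decoding conditioned on a multi-modal prefix (Eq. (2))."""
    tokens = []
    seq = z_prefix  # (1, n, d): concatenated video and text embeddings [z_v, z_t]
    for _ in range(max_new_tokens):
        logits = model(seq)                      # (1, T, vocab): P_theta(x_l | x_1:l-1, z_1:n)
        next_id = int(logits[:, -1].argmax(-1))  # greedy choice; sampling also works
        if next_id == eos_id:
            break
        tokens.append(next_id)
        new_emb = model.embed(torch.tensor([[next_id]]))  # (1, 1, d)
        seq = torch.cat([seq, new_emb], dim=1)
    return tokens  # feed through the text de-tokenizer for the language output
```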
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
743d0fc1-7548-41e6-b05a-ba88868f428e | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## B. Training Strategy
Following the visual instruction tuning paradigm [48, 46], we employ a two-stage training strategy to progressively enable cross-modality alignment and multi-task driving capacity. In both stages, we use the same next-token prediction cross-entropy loss, given in Eq. (3), which maximises the conditional answer likelihood in Eq. (2), where $y_l$ is the ground-truth token.
$$\mathcal{L}_{CE} = -\sum_{l=n+1}^{L} y_l \log P(\mathbf{x}_l \mid \mathbf{z}_{1:n})\tag{3}$$
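A minimal sketch of this masked next-token loss, assuming `logits` come from the LLM backbone and the multi-modal prefix occupies the first `prefix_len` positions (names are illustrative):

```python
import torch.nn.functional as F

def next_token_ce_loss(logits, target_ids, prefix_len):
    # logits: (L, vocab_size), target_ids: (L,) ground-truth tokens.
    # Only answer tokens after the multi-modal prefix z_{1:n} are supervised,
    # as in Eq. (3); in practice targets are additionally shifted by one position.
    return F.cross_entropy(logits[prefix_len:], target_ids[prefix_len:])
```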
Pre-training In the first pre-training stage, we train only the cross-modality projector while freezing the visual encoder and LLM. We use a subset of VIDAL-10M [85] which contains 3 million video-caption pairs. This achieves an alignment between visual and language features by projecting the pre-trained video embeddings into language tokens understandable by the LLM.
Supervised In-context Instruction Tuning While state-of-the-art LLMs exhibit zero-shot ICL capability, several works [18] and our ablation study (Sec. IV-D) have shown further improvement when the model is specifically trained on curated ICL demonstrations. In particular, we construct a multi-modal instruction tuning dataset with structured ICL examples based on the BDD-X dataset [38], resulting in 16K video question-and-answer pairs.
As shown in Fig. 5, for an 8-frame driving video sequence with associated control signals (speed, course, acceleration, and curvature) as the current query, we use the retrieval mechanism of Sec. III-C to retrieve 2 relevant driving experiences, which are then prefixed to the current query as ICL examples.
The dataset is tailored to support three distinct tasks, each represented through question-answer pairs in natural language. Note that, with (1) Action Explanation and (2) Justification naturally represented as natural language, (3) Control Signal prediction is also framed as language token prediction; this is feasible due to the distinct mapping of numerical values to specific tokens within the language model dictionary.
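For illustration, here is a sketch of how such an in-context sample might be assembled as a prompt string; the template, tags, and field names are our own hypothetical format, not the exact one used in the paper (see its Fig. 5).

```python
def build_icl_prompt(retrieved: list, query: dict) -> str:
    """Prefix two retrieved driving experiences to the current query (illustrative format)."""
    parts = []
    for ex in retrieved:  # each ex: dict with a video placeholder, signals, and answers
        parts.append(
            f"<video>{ex['video']}</video>\n"
            f"Control: speed={ex['speed']}, course={ex['course']}\n"
            f"Q: What is the action and why? A: {ex['action']} {ex['justification']}"
        )
    parts.append(
        f"<video>{query['video']}</video>\n"
        f"Control: speed={query['speed']}, course={query['course']}\n"
        f"Q: What is the action and why? A:"
    )
    return "\n\n".join(parts)
```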
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
61657740-f0b8-46a0-9710-91caf606b357 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## C. Retrieval-Augmented In-Context Learning
Another critical component of the system is the memory unit, which consists of a database and a retrieval engine. The database incorporates vectorised video embeddings $\mathbf{z}_{vo}$, extracted with the same video encoder as in Sec. III-A, and control signals $\mathbf{c} \in \mathbb{R}^{28}$ taken directly from sensor recordings. Each vector is uniquely associated with the corresponding human expert textual explanation and justification from the training samples of Sec. III-B.
Retrieval Mechanism To perform the retrieval, we first leverage a lightweight MLP projector of the same structure as in Eq. (1) to project the heterogeneous video and control signal embeddings into the same hybrid embedding $\mathbf{s} \in \mathbb{R}^{1024}$ through metric learning [27]. In particular, we adopt a triplet loss with Euclidean distance as shown in Eq. (4):
$${\cal L}_{Tri}(\mathbf{a},\mathbf{p},\mathbf{n})=\max(||\mathbf{a}-\mathbf{p}||_{2}-||\mathbf{a}-\mathbf{n}||_{2}+\mbox{margin},0)\tag{4}$$
where the positive pairs $(\mathbf{a}, \mathbf{p})$ and negative pairs $(\mathbf{a}, \mathbf{n})$ between hybrid embeddings $\mathbf{s}$ are selected based on the text similarity (i.e., TF-IDF score) of driving action and justification in the BDD-X training set, as we aim to form the metric space such that scenarios leading to similar driving actions lie close together and vice versa. This approach addresses the limitations of relying solely on visual similarity, which we have empirically found can result in sub-optimal performance (Sec. IV-D), and also solves the problem of heterogeneous sensor inputs that are hard to compare for similarity. We then perform retrieval through an efficient vector similarity search. Given a query vector $\mathbf{s}_q$, the cosine similarity between the query vector and each vector in the database is computed as follows:
$$S_c(\mathbf{s}_q, \mathbf{s}^{(i)}) = \frac{\mathbf{s}_q \cdot \mathbf{s}^{(i)}}{\|\mathbf{s}_q\|\,\|\mathbf{s}^{(i)}\|}\tag{5}$$
Subsequently, we consistently select the two most relevant driving samples based on this similarity score. These samples represent the entire reasoning process, from contextual information to question-answer pairs, as illustrated in Fig. 5.
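A minimal NumPy sketch of this memory unit and top-$k$ cosine retrieval follows; the class layout and names are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

class MemoryUnit:
    """Hybrid-embedding memory: each 1024-d key indexes one expert demonstration
    (video, control signals, textual explanation and justification)."""
    def __init__(self, embeddings: np.ndarray, records: list):
        # embeddings: (N, 1024) hybrid video+control embeddings; records: metadata
        self.keys = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        self.records = records

    def retrieve(self, s_q: np.ndarray, k: int = 2) -> list:
        # cosine similarity per Eq. (5), then take the top-k most similar samples
        q = s_q / np.linalg.norm(s_q)
        scores = self.keys @ q
        top = np.argsort(-scores)[:k]
        return [self.records[i] for i in top]
```

The two records returned here are what get prefixed to the current query as ICL examples, as described next.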
Retrieval Augmented In-Context Learning (RA-ICL) To perform RA-ICL, we prefix retrieved samples before the current query, facilitating an implicit gradient descent through the meta-optimiser of the LLM, as shown in [17]. This approach is also applicable to an MLLM with the architecture specified in Sec. III-A. For a given transformer-based pre-trained MLLM modelling the prefix-conditioned output probability as in Eq. (2), consider one head of the multi-head attention in a single transformer block as follows:
$$F_{ICL}(\mathbf{z}_{1:n}) = \text{Attention}(Q, K, V) = W_V[\mathbf{z}_{icl}; \mathbf{z}_q]\,\text{Softmax}\left(\frac{(W_K\mathbf{z}_{1:n})^T(W_Q\mathbf{z}_{1:n})}{\sqrt{d_i}}\right)\tag{6}$$
where $\mathbf{z}_{1:n} = [\mathbf{z}_{icl}; \mathbf{z}_q]$ represents the conditional prefix consisting of the ICL example embeddings $\mathbf{z}_{icl}$ and the current query embeddings $\mathbf{z}_q$, respectively, and $W_{Q}, W_{K}, W_{V} \in \mathbb{R}^{d_i \times d_o}$ are the linear transformations on query, key, and value in the attention block. In [17], in-context learning was shown to effectively achieve a meta-optimisation that implicitly updates the model with an estimated meta-gradient. We provide in our work a new, alternative derivation of this, supported by more recent work [40], using the following *softmax-free* linear attention expression:

$$F_{ICL}(\mathbf{z}_{1:n}) = W_V[\mathbf{z}_{icl}; \mathbf{z}_q]\,(W_K[\mathbf{z}_{icl}; \mathbf{z}_q])^T W_Q\mathbf{z}_{1:n}\tag{7}$$

This can be simplified, with a more detailed derivation in Apx. D, as

$$F_{ICL}(\mathbf{z}_{1:n}) = (\Delta W_{ICL} + W_{ZSL})W_Q\mathbf{z}_{1:n}\tag{8}$$
where we separate out the terms $W_{ZSL}$ independent of the ICL examples and dependent solely on the current query from those $\Delta W_{ICL}$ dependent on the ICL examples, given by:
$$\begin{split}\Delta W_{ICL}&=\sum_{i}\ W_{V}\mathbf{z}_{icl,i}(W_{K}\mathbf{z}_{icl,i})^{T}\\ W_{ZSL}&=W_{V}\mathbf{z}_{q}(W_{K}\mathbf{z}_{q})^{T}\end{split}\tag{9}$$
Now, with more detail in Apx. D, a forward-pass
$${\cal F}({\bf x})=(W_{0}+\Delta W){\bf x}\tag{10}$$
through a linear layer ${\cal F}$ after its weights $W_{0}$ have been updated by $\Delta W$ has the weight updates coupled to the input in the form
$$\Delta W{\bf x}=\sum_{i}\eta\frac{\partial L}{\partial{\bf y}}|_{{\bf y}_{i}}{\bf x}_{i}^{T}{\bf x}\tag{11}$$
where $\mathbf{x}_i, \mathbf{y}_i$ are the (mini-batch) inputs and outputs to the layer that resulted in the weight update when backpropagating loss $L$ with learning rate $\eta$. Therefore we have a weighted sum of dot products, which is *akin to an attention mechanism*. Indeed, by inspection of the similar dot-product expressions $W_V\mathbf{z}_{icl,i}(W_K\mathbf{z}_{icl,i})^T \leftrightarrow \eta\frac{\partial L}{\partial \mathbf{y}}\big|_{\mathbf{y}_i}\mathbf{x}_i^T\mathbf{x}$, we note that we match the form for the linear layer above:
$$(W_{ZSL}+\Delta W_{ICL})W_{Q}\mathbf{z}_{1:n}\leftrightarrow(W_{0}+\Delta W)\mathbf{x}$$
This can therefore be interpreted to say that the output of the attention is adjusted in a meta-optimal way to conform to the samples provided as input context, much like gradient descent on the linear layer would adjust that layer to conform to the mini-batch training data, but *crucially* in the case of RAG-Driver, without backpropagation.
RA-ICL thus serves as an efficient inference-time method that improves MLLM performance in explainable driving without further training effort; we empirically verify that it is extremely effective in boosting both prediction performance and generalisation capacity.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c477fdd0-bd61-4aac-99c7-7a435e3700a5 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## IV. Experiments

## A. Settings And Datasets
We empirically evaluate the proposed Retrieval-augmented In-Context Learning (RA-ICL) framework within the Multimodal Large Language Model (MLLM), targeting explainable driving applications. We aim to validate its efficacy in general driving scenarios with a focus on two main aspects: (1) explainability in driving action explanation and justification, and (2) control signal prediction. We conduct experiments with the BDD-X [38] dataset, a widely adopted benchmark in explainable driving comprising 77 hours of video across the US under different road and weather conditions. We customise the format as shown in Fig. 5, resulting in 16,803 and 2,123 video question-answering pairs for training and testing, respectively. More importantly, we further explore the transfer learning capacity of zero-shot generalisation in unseen environments. We leverage a customised dataset, *Spoken-SAX*, comprising 58 testing question-answering pairs recorded in London, UK, presenting a significant distribution shift from the BDD-X dataset.
Benchmark Settings For all experiments, we train the MLLM using the BDD-X training split. Subsequent evaluations of general explainability and control signal prediction capabilities are conducted on the BDD-X test split, with the BDD-X training split as the memory database. For the transfer learning experiments, we employ the same foundational model and test it on *Spoken-SAX*, but the memory database is still constructed using the BDD-X training split, i.e., zero-shot generalisation.
Implementation Detail For each driving video, we uniformly sample 8 frames and resize them to 224 × 224. For the MLLM, we train the model for one and two epochs in the pre-training and fine-tuning stages, respectively. For the embedding projector, we train the model for 300 epochs. Further experiment implementation details are provided in Apx. B.
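As an illustration, a minimal sketch of this frame sampling and resizing step, assuming a `(T, H, W, 3)` uint8 frame array and OpenCV for resizing (both assumptions on our part):

```python
import numpy as np

def preprocess_video(frames: np.ndarray, num_frames: int = 8, size: int = 224):
    """Uniformly sample `num_frames` frames and resize each to `size` x `size`."""
    import cv2  # assumed dependency for resizing
    idx = np.linspace(0, len(frames) - 1, num_frames).round().astype(int)
    sampled = [cv2.resize(frames[i], (size, size)) for i in idx]
    clip = np.stack(sampled)           # (8, 224, 224, 3)
    return clip.transpose(3, 0, 1, 2)  # (3, 8, 224, 224), matching V_i in Sec. III-A
```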
Evaluation Metric For the driving action description and justification tasks, we use the same metrics as [33], including 4-gram BLEU (B4) [59], METEOR (M) [4], and CIDEr (C) [73]. These metrics evaluate text generation quality, with BLEU focusing on n-gram precision, METEOR incorporating semantic and syntactic nuances, and CIDEr emphasising consensus and relevance in tasks like image captioning. Moreover, for the control signal evaluation, we again follow [33] and report the Root Mean Square Error (RMSE) in both steering angle (°) and speed (m/s). We also present "tolerant accuracy" metrics, $A_\sigma$, representing the accuracy of predictions when binarised as within a tolerance threshold $\sigma$ of the ground truth.
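The control-signal metrics are straightforward to state in code; a short sketch (function names are ours):

```python
import numpy as np

def tolerant_accuracy(pred: np.ndarray, gt: np.ndarray, sigma: float) -> float:
    """A_sigma: fraction of predictions within tolerance sigma of the ground truth."""
    return float(np.mean(np.abs(pred - gt) <= sigma))

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

# e.g. speed predictions in m/s evaluated at the thresholds used in Tab. II
pred, gt = np.array([4.9, 10.2, 0.4]), np.array([5.0, 9.0, 0.0])
acc = {s: tolerant_accuracy(pred, gt, s) for s in (0.1, 0.5, 1.0, 5.0, 10.0)}
```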
Baselines We compare against various driving action description and justification baselines, such as the video-language sequence-to-sequence recurrent neural network S2VT [74] and the visual-attention-based convolutional neural network WAA [38]. We also compare with the state-of-the-art explainable driving methods, including the video-transformer-based ADAPT [33] and the visual-instruction-tuned MLLM DriveGPT4 [78], which is further capable of control signal prediction.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
81f6e47b-f7f4-4d92-a24d-ef00f465029f | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## B. Explainability In Driving Action And Justification
We begin by evaluating the quality and accuracy of explanations and justifications for driving actions separately. As shown in the upper part of Tab. I, our method demonstrates performance comparable to the state-of-the-art specialist method ADAPT [33], a characteristic not observed in previous MLLM-based methods. In particular, when compared with DriveGPT4 [78], which also uses an MLLM with a similar architecture and number of parameters but incorporates the extra LLaVA-150K dataset [48] for visual instruction tuning, our approach, relying solely on the BDD-X dataset, outperforms it in terms of explainability. This is evidenced by an average performance improvement of 10.8% across all metrics, underscoring the effectiveness of ICL in enhancing the emergent reasoning capabilities of MLLMs.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
df58e2f5-341b-4311-8bd9-b47a53240ba1 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## C. Control Signal Prediction
We next evaluate the accuracy of control signal predictions for course (i.e., turning angle) and speed. As indicated in Tab. II, our method surpasses others in open-loop control accuracy across various tolerance ranges and in terms of RMSE, significantly outperforming the baseline approaches. In particular, when compared to the state-of-the-art DriveGPT4, which also uses the same visual input combined with past control signals for autoregressive prediction, our method stands out by employing retrieval-augmented ICL examples. This indicates that the analogy in the overall reasoning process provided by the ICL examples also contributes to the improvement in numerical control signal prediction.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4d1467ce-46da-4f0c-b39d-55e6a4481950 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## D. Ablation Study On Retrieval Strategy
We perform a more comprehensive ablation study to evaluate the efficacy of our proposed retrieval-augmented in-context learning. We first investigate the similarity metric for retrieval. In particular, we compare the use of visual similarity (i.e., video embeddings only) with hybrid similarity (i.e., the hybrid video and control signal projected embedding of Sec. III-C). Our empirical findings indicate suboptimal performance when using visual similarity, possibly because it tends to prioritise ICL examples that are most perceptually similar, rather than those that effectively demonstrate the reasoning process. By fine-tuning the embeddings, we not only harness the potential of heterogeneous multi-modal sensor input but also enable more effective ICL example retrieval.
We also investigate whether to apply ICL examples during training or solely at inference time. As shown in Tab. III, we found that the MLLM is incapable of making reasonable predictions using ICL examples without prior training, regardless of the retrieval strategy chosen. This suggests that the pre-trained MLLM is not equipped to effectively perform zero-shot ICL. We hypothesise that supervised fine-tuning plays a crucial role in enhancing the ICL capabilities of the MLLM, necessitating the provision of reasoning demonstrations, which aligns with observations in [18].
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
28191f10-2bf1-40bd-ad19-8b175342fedd | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## E. Generalisation Capacity
One of the critical capacities of an autonomous system is to generalise to unseen environments outside its training distribution. However, in the domain of explainable driving, we observe that existing methods are unable to perform such generalisation, which poses challenges for their deployment. As shown in the lower part of Tab. I, ADAPT and the base MLLM (i.e., trained without ICL) reveal dramatic performance degradation relative to the in-distribution situation. In contrast, our method leverages ICL examples to demonstrate a significant performance boost by a large margin. Note that even though the memory database is constructed with BDD-X, RA-ICL can still generalise in a zero-shot manner. This is potentially due to the robustness of the hybrid retrieval process, where samples with less distribution shift can still be selected to serve as effective ICL examples.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e178f548-363b-4114-b936-1fd5d35a5d5f | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## F. Qualitative Demonstration
We also present a series of qualitative examples comparing the driving action explanation and justification provided by the human ground truth and the prediction from our method. As shown in Fig. 6, we observe that *RAG-Driver* produces robust, intelligible action explanations and justifications under different environments (i.e., night time and adverse weather), with a control signal close to the human driver record. More importantly, in the out-of-distribution setting *Spoken-SAX*, as indicated by the clear visual difference, we observe that the prediction also produces human-understandable answers, qualitatively validating the exceptional zero-shot generalisation capacity.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f9688efe-918a-49ec-bfab-7012b515360a | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## V. Limitations And Future Work
This work aims to develop a generalisable explainable driving commentator using a Multi-Modal Large Language Model (MLLM), addressing a significant obstacle that has hindered deployment: poor generalisation capacity. However, several problems still need to be addressed. For instance, the prevalent issue of hallucination in MLLMs, while mitigated, is still observed. We hypothesise that this is due to the limited general video understanding capacity, as the video encoder only processes 8 frames per video.
*Table I: Driving action explanation and justification performance (B4, C, M) under in-distribution evaluation (BDD-X) and zero-shot generalisation (Spoken-SAX), comparing S2VT [74], S2VT++ [74], SAA [38], WAA [38], ADAPT [33], DriveGPT4 [78], and OURS, with deltas w.r.t. the generalist state of the art.*
*Table II: Open-loop control signal prediction on BDD-X.*

| Method | Course RMSE (°) ↓ | Course A0.1 ↑ | Course A0.5 ↑ | Course A1.0 ↑ | Course A5.0 ↑ | Course A10.0 ↑ | Speed RMSE (m/s) ↓ | Speed A0.1 ↑ | Speed A0.5 ↑ | Speed A1.0 ↑ | Speed A5.0 ↑ | Speed A10.0 ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ADAPT [33] | 5.87 | 54.49 | 86.39 | 91.06 | 97.36 | 98.20 | 2.68 | 11.77 | 31.79 | 47.48 | 92.75 | 95.87 |
| DriveGPT4 [78] | 4.57 | 69.22 | 79.14 | 84.47 | 95.72 | 96.74 | 1.09 | 56.93 | 77.77 | 87.97 | 99.00 | 99.57 |
| OURS | 4.48 | 74.32 | 88.69 | 93.12 | 98.30 | 99.10 | 0.69 | 51.12 | 85.54 | 94.49 | 99.81 | 99.91 |
*Table III: Ablation on the retrieval strategy (visual vs. hybrid search) and the phase in which ICL examples are applied (training vs. inference) on BDD-X, reporting Act. B4, Act. C, Jus. B4, Jus. C, Speed Err., and Course Err.: applying ICL examples at inference without ICL training yields degenerate outputs (scores of 0.0), while hybrid-search retrieval with ICL in both training and inference performs best.*
Also, due to the limited access to open-source models and computational cost, we employ a relatively small MLLM with 7 billion parameters, which falls short of some state-of-the-art models (e.g., GPT4-V [1], Gemini [70]). We anticipate that the development of a more powerful MLLM backbone with lower computational costs could further enhance the application of MLLMs in driving.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
dc3aaf74-2424-4638-b075-ed53c2ae8a86 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## VI. Conclusion
We propose *RAG-Driver*, a Multi-Modal Large Language Model with retrieval-augmented in-context learning capacity, designed for generalisable and explainable end-to-end driving. It exhibits strong capability in providing numerical control signals, along with explanations and justifications for driving actions. More importantly, it shows impressive zero-shot generalisation to unseen environments without the need for additional training.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2b5d8ed3-d19c-446b-92ee-b044dd3874bc | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## Appendix
## A. Discussion and Limitation

Scale of MLLM While our Multi-modal Large Language Model (MLLM) has exhibited impressive capabilities in visual reasoning and planning for driving tasks, it is worth noting that it comprises only 7 billion parameters. This size is relatively modest when compared to more well-known models such as GPT4-V [1] and Gemini [70], which boast significantly larger parameter counts and demonstrate superior, near-human levels of visual understanding and reasoning. In various related fields such as visual question answering, problem-solving, and interactive dialogue, researchers have observed a clear trend: the scale of the model's parameters and the breadth of training data sources are pivotal, with performance improvements typically seen in tandem with model scaling. Based on this trend, we expect similar advancements in the realm of driving applications: a larger model could further enhance the capabilities of MLLMs in driving scenarios.
Number of In-Context Learning Examples During training and inference, we provide 2 ICL examples for each query. This is due to the limited context window size (2048 tokens) of the LLM backbone. With recent improvements in LLM context window size, we expect to see a more flexible adoption of ICL examples.
## B. Training Details

Embedding Projector We leverage a three-layer MLP as the embedding projector to fuse the video ($1 \times 1024$) and control signal ($1 \times 28$) embeddings into a hybrid embedding. The lightweight projector uses GELU activations and has an input dimension of 1052 and an output dimension of 1024. The margin used in the triplet loss is 0.5. We train the model for 200 epochs with a learning rate of 1e-5 using the Adam optimiser.
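A minimal PyTorch sketch of this training setup follows; the exact layer widths and the mining of positives/negatives by TF-IDF text similarity are paraphrased assumptions rather than the paper's code.

```python
import torch
import torch.nn as nn

# Hybrid embedding projector: fuses video (1024) + control (28) -> 1024,
# trained with the triplet loss of Eq. (4).
projector = nn.Sequential(
    nn.Linear(1052, 1024), nn.GELU(),
    nn.Linear(1024, 1024), nn.GELU(),
    nn.Linear(1024, 1024),
)
criterion = nn.TripletMarginLoss(margin=0.5, p=2)  # Euclidean distance, margin 0.5
optimizer = torch.optim.Adam(projector.parameters(), lr=1e-5)

def train_step(anchor, positive, negative):
    # Inputs: (batch, 1052) concatenated video and control-signal embeddings.
    # Positive/negative pairs are mined via TF-IDF similarity of the paired
    # action/justification text in the BDD-X training set.
    loss = criterion(projector(anchor), projector(positive), projector(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```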
MLLM Backbone We use a learning rate of 2e-5 with a cosine scheduler. We use a batch size of 4 with a gradient accumulation step of 2 on 8 A100 GPUs, which leads to an effective training batch size of 128. We use a warm-up strategy in the first 5 epochs with a warm-up ratio of 0.03. We train the model for 2 epochs.
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5cb48f63-a287-4579-8ab7-1819873b8fe1 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## C. Baseline Details
In our comparison, we evaluate several baseline methods. The first, S2VT [74], utilises an end-to-end sequence-to-sequence model with Long Short-Term Memory (LSTM) networks. It is trained on paired video-sentence data, linking video frame sequences to corresponding word sequences, enabling it to generate descriptive captions for video events. The second method, WAA [38], employs a visual attention model that trains a convolutional network from images to vehicle control commands. This method focuses on identifying influential image regions through the controller's attention and uses an attention-based video-to-text model to produce textual explanations aligned with the controller's attention maps, grounding the explanations in relevant scene parts. The third approach, ADAPT [33], is a transformer-based method that leverages a multi-task joint training framework, aligning driving action captioning with control signal prediction. Finally, DriveGPT4 [78] is trained on a visual instruction tuning dataset derived from the BDD-X dataset [38], built following LLaVA [48] with the assistance of ChatGPT; it processes multi-modal input data and generates text responses while predicting control signals.
D. Linear Layer Parameter Update Derivations

Consider a forward-pass through a linear layer $F$ after its weights $W_0$ have been updated by $\Delta W$:

$$y = F(x) = (W_0 + \Delta W)x = W_0 x + \Delta W x$$

The weight update itself is expressed as

$$\Delta W = \eta \frac{\partial L}{\partial y}\Big|_{y_i} x_i^T \quad\Rightarrow\quad \Delta W x = \eta \frac{\partial L}{\partial y}\Big|_{y_i} x_i^T x$$
where $x_i, y_i$ are the input and output to the layer that resulted in the weight update. Now, if we in fact have optimised over a mini-batch of input-outputs $x_i, y_i$ we have

$$\Delta W = \sum_i \eta \frac{\partial L}{\partial y}\Big|_{y_i} x_i^T \quad\Rightarrow\quad \Delta W x = \sum_i \eta \frac{\partial L}{\partial y}\Big|_{y_i} x_i^T x$$

Therefore we have a weighted sum of dot products, which is akin to an attention mechanism. Indeed, from Eq. (6) we can apply *softmax-free* linear attention expressions such as in [40] for | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
953a7b23-907a-41a4-ae12-1191c4078062 | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## D. Linear Layer Parameter Update Derivations
$$W_{V}\left[\boldsymbol{z}_{icl};\boldsymbol{z}_{q}\right]\operatorname{softmax}\left(\frac{\left(W_{K}\boldsymbol{z}_{1:n}\right)^{T}\left(W_{Q}\boldsymbol{z}_{1:n}\right)}{\sqrt{d}}\right) \rightarrow W_{V}\left[\boldsymbol{z}_{icl};\boldsymbol{z}_{q}\right]\left(W_{K}\left[\boldsymbol{z}_{icl};\boldsymbol{z}_{q}\right]\right)^{T}W_{Q}\boldsymbol{z}_{1:n}$$

Multiplying the linear attention matrices through the stacked in-context and query embeddings, we have

$$W_{V}\boldsymbol{z}_{icl}(W_{K}\boldsymbol{z}_{icl})^{T}W_{Q}\boldsymbol{z}_{1:n} + W_{V}\boldsymbol{z}_{q}(W_{K}\boldsymbol{z}_{q})^{T}W_{Q}\boldsymbol{z}_{1:n}$$

Now take out a common factor of $W_{Q}\boldsymbol{z}_{1:n}$ for
$$\left(W_{V}\boldsymbol{z}_{icl}(W_{K}\boldsymbol{z}_{icl})^{T} + W_{V}\boldsymbol{z}_{q}(W_{K}\boldsymbol{z}_{q})^{T}\right)W_{Q}\boldsymbol{z}_{1:n}$$

and put $\Delta W_{ICL} = W_{V}\boldsymbol{z}_{icl}(W_{K}\boldsymbol{z}_{icl})^{T}$ and $W_{ZSL} = W_{V}\boldsymbol{z}_{q}(W_{K}\boldsymbol{z}_{q})^{T}$ as the terms both pre-multiplying $W_{Q}\boldsymbol{z}_{1:n}$.
Note that $W_{ZSL}$ is independent of the in-context terms (depending only on the query). Now, in $\Delta W_{ICL}$ we in fact have a set of in-context retrieved samples $\boldsymbol{z}_{icl,i}$ such that

$$W_{V}\left[\boldsymbol{z}_{icl,0},\boldsymbol{z}_{icl,1},\ldots\right]\left(W_{K}\left[\boldsymbol{z}_{icl,0},\boldsymbol{z}_{icl,1},\ldots\right]\right)^{T} \rightarrow \Delta W_{ICL}=\sum_{i}W_{V}\boldsymbol{z}_{icl,i}(W_{K}\boldsymbol{z}_{icl,i})^{T}$$
 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a8a387f2-6d44-4733-9a2e-38038676044b | # Rag-Driver: Generalisable Driving Explanations With Retrieval-Augmented In-Context Learning In Multi-Modal Large Language Model
## D. Linear Layer Parameter Update Derivations
Finally, by inspection of similar dot-product expressions

$$W_{V}\boldsymbol{z}_{icl,i}(W_{K}\boldsymbol{z}_{icl,i})^{T}\leftrightarrow\eta\frac{\partial L}{\partial\boldsymbol{y}}\Big|_{\boldsymbol{y}_{i}}\boldsymbol{x}_{i}^{T}\boldsymbol{x}$$

we note that we match the form for the linear layer above:
$$(W_{ZSL}+\Delta W_{ICL})W_{Q}\mathbf{z}_{1:n}\leftrightarrow(W+\Delta W)\mathbf{x}$$
This can therefore be interpreted to say that the output of the attention is adjusted in a meta-optimal way to conform to the samples provided as input context, much like gradient descent on the linear layer would adjust that layer to conform to the mini-batch training data, but _crucially_ in the case of _RAG-Driver_, without supervision.
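The decomposition above is a linear-algebra identity that is easy to check numerically. The following minimal NumPy sketch (ours, not from the paper) verifies that linear attention over the stacked in-context and query embeddings equals the query-only term plus the ICL "weight update"; the dimensions and random inputs are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_V, W_K, W_Q = (rng.standard_normal((d, d)) for _ in range(3))
z_icl = rng.standard_normal((d, 2))   # two retrieved in-context embeddings
z_q = rng.standard_normal((d, 1))     # query embedding
Z = np.hstack([z_icl, z_q])           # stacked tokens [z_icl; z_q]

# Softmax-free linear attention applied to the query token
out_attn = W_V @ Z @ (W_K @ Z).T @ (W_Q @ z_q)

# Decomposition into a zero-shot term plus an ICL "weight update"
W_ZSL = W_V @ z_q @ (W_K @ z_q).T
dW_ICL = sum(W_V @ z_icl[:, [i]] @ (W_K @ z_icl[:, [i]]).T
             for i in range(z_icl.shape[1]))
out_decomposed = (W_ZSL + dW_ICL) @ (W_Q @ z_q)

assert np.allclose(out_attn, out_decomposed)
```
| {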
"creation_datetime": "2024-03-04",
"file_name": "2402.10828v1.md",
"file_path": "paper_data/2402.10828v1.md",
"file_size": 64885,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
61283a25-c49b-41c9-b543-a39f62acab7d | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
Ziru Chen1, Michael White1, Raymond Mooney2, Ali Payani3, Yu Su1**, Huan Sun**1
1The Ohio State University
2The University of Texas at Austin
3Cisco Research
{chen.8336, white.1240, su.809, sun.397}@osu.edu mooney@cs.utexas.edu apayani@cisco.com | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
dd18c4d9-d660-4fa1-8ed1-cb7b146922f7 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Abstract
In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method. We investigate the practical utility of two advanced planning methods, iterative correction and tree search. We present a comprehensive analysis of how discrimination accuracy affects the overall performance of agents when using these two methods or a simpler method, re-ranking. Experiments on two tasks, text-to-SQL parsing and mathematical reasoning, show that: (1) advanced planning methods demand discriminators with at least 90% accuracy to achieve significant improvements over re-ranking; (2) current LLMs' discrimination abilities have not met the needs of advanced planning methods to achieve such improvements; (3) with LLM-based discriminators, advanced planning methods may not adequately balance accuracy and efficiency. For example, compared to the other two methods, tree search is at least 10–20 times slower but leads to negligible performance gains, which hinders its real-world applications.1 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
18e72280-da5f-451d-83d7-4d8f342314ef | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 1 Introduction
Planning plays a crucial role in intelligent behaviors of human and AI agents. Since the early stage of AI research, various methods have been proposed to build agents that can plan efficiently and accurately (Newell and Simon, 1956; Russell and Norvig, 2010). The problem-solving procedure in these AI agents usually involves three steps: searching for possible action sequences, predicting their expected outcomes with an internal world model, and finding an action sequence to achieve the best expected outcome (Russell and Norvig, 2010; Mattar and Lengyel, 2022). This procedure shares common traits with how large language models (LLMs) solve multi-step tasks, including mathematical reasoning (Wei et al., 2022), multi-hop question answering (Yao et al., 2023b), and code generation (Yang et al., 2023). At each step, an LLM searches for possible next actions and generates their language representations (*generation*).
To evaluate the actions, the LLM utilizes itself or another LLM to predict the outcomes of actions, in the form of rewards or correctness (*discrimination*).
Afterwards, it incorporates the outcomes into its problem-solving process with some strategy to find the best action sequence (*planning*).
Motivated by the similarity, we critically examine how LLMs solve multi-step tasks from a language-agent view. We unify different problem-solving procedures of LLMs into an agent framework (Figure 1) consisting of a generator, a discriminator, and a planning method. Under this framework, we investigate the practical utility of more advanced planning methods, such as tree search, in comparison with simpler methods (e.g., re-ranking).

We hypothesize that the discriminator may be a deciding factor and systematically investigate two research questions: **(RQ1)** How does discrimination accuracy affect the performance of language agents using different planning methods? **(RQ2)** Can LLM-based discriminators correctly assess language agents' actions in practical settings?

To this end, we analyze LLMs' discrimination abilities and their impact on three categories of planning methods: re-ranking, iterative correction, and tree search.
We comprehensively evaluate these methods on two real-world tasks, text-to-SQL parsing and mathematical reasoning, with open-source, closed-source, and fine-tuned LLM discriminators. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9a2f8070-708f-4173-b1cc-bedd49d0e159 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 1 Introduction
First, we use oracle environmental information to simulate discriminators with different levels of accuracy. The simulation experiments exhibit a strong correlation between discrimination accuracy and overall task performance among all three types of planning methods. Then, in a non-oracle setting, we closely investigate the LLM-based discriminators and show how environmental observations can effectively improve them. Finally, we conduct end-to-end evaluations of the discriminators and planning methods to verify and strengthen our findings. In summary, our experiments show that:
(1) Advanced planning methods, i.e., iterative correction and tree search, demand highly accurate discriminators (≥ 90% accuracy) to achieve decent improvements over the simpler method, re-ranking.
(2) Using environmental feedback, we improve the discrimination accuracy of LLMs by up to 30.2 and 8.4 absolute points on text-to-SQL parsing and mathematical reasoning, respectively. Yet, our end-to-end evaluations suggest they have barely met the need for advanced planning methods to show significant improvements over re-ranking.
(3) Meanwhile, advanced planning methods may not adequately balance accuracy and efficiency when using LLM-based discriminators. In our experiments, compared to the other two methods, tree search is at least 10–20 times slower but leads to negligible performance gains. This accuracy-efficiency trade-off can impede the deployment of tree search in real-world applications. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e54d7ca9-5b5e-4a18-b2d8-8f67d4cdf6d0 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 2 Related Work
A lot of recent research efforts have focused on advanced planning methods for improving the multistep problem-solving abilities of LLMs (Li et al. 2023b; Madaan et al. 2023; Wang et al. 2023b; Yao et al. 2023a,b; Zhou et al. 2023, *inter alia*). Despite different designs, all these methods use a discriminator to evaluate the agents' actions, or planning steps. In fact, instead of planning methods, an agent's discriminator could be the more critical component. Since incorrect outcome predictions could lead to suboptimal plans, discriminators may decide the performance of an agent, regardless of its planning method (Mattar and Lengyel, 2022).
While it is commonly believed that discrimination is easier than generation for human and AI agents (Gu et al., 2023), West et al. (2024) pose the hypothesis that state-of-the-art generative AI models, including LLMs, may not have discrimination abilities matching their generation abilities. This hypothesis coincides with the findings of Huang et al. (2024) and Wang et al. (2023a) that, without any external feedback or with obviously absurd feedback, LLMs may recognize some of their self-generated correct plans as wrong. Huang et al. (2024) also note that the performance gains of self-correction, a kind of iterative correction method, may rely on some high-quality external feedback, such as checking ground-truth labels or test sets for planning loop termination. However, such external feedback usually does not exist in practical applications because solutions to new problems are unknown, and annotating comprehensive test cases can be nontrivial and costly.
Distinct from these existing studies, our work focuses on studying the relationship between discriminators and planning methods, including but not limited to self-correction, and attempts to improve LLMs' discrimination capability. Our findings can provide useful guidelines for choosing planning methods and implementing language agents in practice. In light of our findings, we encourage future research to thoroughly evaluate language agents with various practical, non-oracle discriminators.
We also advocate that improving LLM-based discriminators is an important future direction to enhance agents' accuracy and efficiency when using advanced planning methods. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
73fd7d0e-9316-4220-ba49-576854c391f0 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 3 Our Framework
As shown in Figure 1, we systematically analyze different planning methods in a unified generator-discriminator framework. Our framework consists of a generator that proposes (partial) action sequences, a discriminator that evaluates the outcomes of these actions, and a planning method that ranks the actions according to their outcomes and manages the interaction between the two models. In this section, we describe each of the three components and how they are instantiated on text-to-SQL parsing and mathematical reasoning (Section 4.1). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5a16098d-b5ef-4177-b345-dc64c3ef229e | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 3.1 Generator
For each planning step, we prompt the generator to sample action sequences (SQL queries or Python programs for math reasoning). For text-to-SQL parsing, we use 1-shot prompting, where the example is retrieved from the training sets using BM25 (Robertson and Zaragoza, 2009). For math reasoning, we use a fixed 2-shot prompt adapted from Ni et al. (2023b). See prompts in Appendix C.

Figure 2: (a) Re-ranking. (b) Iterative Correction. (c) Tree Search. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5b08c471-c91c-4348-9132-d2167399ca04 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 3.2 Discriminator
Given some (partial) action sequences, we formulate the discrimination task as binary question answering (Kadavath et al., 2022; Ke et al., 2023).
The discrimination score of each tested example is the probability of "Yes" being generated as the next token. Specifically, we prompt the LLMs with the question "Is the SQL/python program correct given the utterance/problem?" to generate one single token with its probability as the score. With this formulation, we evaluate three types of LLMs in our experiments (Section 4.2). Similar to the generator, we use 1-shot prompting with BM25 retrieval for text-to-SQL parsing and a fixed 2-shot prompt for math reasoning. Details are in Appendix A.
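To make the scoring concrete, here is a minimal sketch of this binary question-answering formulation, assuming a Hugging Face causal LM; the checkpoint name and prompt assembly are illustrative, not the paper's exact template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-13b-Instruct-hf"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

def discrimination_score(problem: str, program: str) -> float:
    # Score = probability of "Yes" as the single next token.
    prompt = (f"{problem}\n{program}\n"
              "Is the SQL/python program correct given the utterance/problem?\n")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    yes_id = tokenizer.encode("Yes", add_special_tokens=False)[0]
    return probs[yes_id].item()
```
| {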
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c6c15b72-25ff-4f29-8d08-0483db49751b | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 3.3 Planning Methods
Re-ranking. Re-ranking is a straightforward planning method. After sampling a few complete action sequences from the generator, it uses the discriminator to score them and return the highest-scoring plan (Figure 2a). Although simple, it is commonly used for code generation (Ni et al., 2023a) and mathematical reasoning tasks (Wang et al., 2023b; Li et al., 2023b). We consider re-ranking as a baseline planning method for more advanced ones.
Iterative correction. Like re-ranking, iterative correction starts with the generator proposing a complete action sequence. Then it leverages multiple rounds of revision to improve the initial plan based on the discriminator's feedback (Figure 2b). When the generator and the discriminator are the same LLM, it becomes a prevalent planning method, self-correction (Madaan et al., 2023; Shinn et al., 2023; Yao et al., 2023b; Chen et al., 2024).
While some work uses greedy generation, our implementation samples the same number of action sequences as other planning methods for fair comparison. Then, it uses the discriminator to select the best-scoring one for the next round's revision. We allow up to 10 rounds of corrections, with early exiting when the best plan meets a discrimination score threshold (> 0.99) or the score has not improved for 3 consecutive iterations. For fair comparison, we prompt the generator to revise plans with 0-shot instruction following (Appendix C) instead of few-shot, since in-context examples may introduce additional information.
Tree Search. Tree search is another popular planning method for language agents, such as Monte-Carlo Tree Search (Chaffin et al., 2022), Pangu (Gu et al., 2023), Tree of Thoughts (Yao et al., 2023a), and Language Agent Tree Search (Zhou et al., 2023). It uses a memory structure (e.g., a heap) to store observed partial action sequences and their scores. For each iteration, it prompts the generator for possible next steps for the current best partial plan, calls the discriminator to evaluate the steps, and updates the memory with new plans and scores (Figure 2c). Our tree search implementation is a kind of MCTS (Zhang et al., 2023), with the following five steps: | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d22f6a32-254f-4924-aa1e-4c41702f66c2 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 3.3 Planning Methods
(1) *Selection*: Find the highest scoring partial plan in the memory, implemented as a heap structure.
(2) *Expansion*: Prompt the generator for the next step of this partial plan. We follow recent work to define a step to be a SQL clause (Chen et al., 2023c) or one line of Python code (Bui et al., 2022), which is semantically more meaningful.
(3) *Simulation*: Reuse the generator to complete the partial plans as Monte-Carlo simulations.
(4) *Evaluation*: Evaluate the simulations with the discriminator. The score for each new step is the maximum score of all simulations starting from it.
(5) *Backpropagation*: Update the partial plan with the new step and score (if higher) and insert them into the heap memory. After the update, if there is a complete plan in the heap memory, we terminate the tree search and return this plan.
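Putting the five steps together, a minimal sketch of the search loop (our reading, not the authors' code; `propose_steps`, `simulate`, `score`, and `is_complete` stand in for the generator, Monte-Carlo simulation, discriminator, and a plan-completeness check):

```python
import heapq

def tree_search(problem, propose_steps, simulate, score, is_complete, max_iters=50):
    heap = [(0.0, "")]  # (negated score, partial plan); start from an empty plan
    for _ in range(max_iters):
        if not heap:
            break
        # (1) Selection: pop the highest-scoring partial plan.
        _, partial = heapq.heappop(heap)
        # (2) Expansion: propose next steps (a SQL clause / one Python line).
        for step in propose_steps(problem, partial):
            candidate = partial + step
            # (3) Simulation: roll the candidate out into complete plans.
            rollouts = simulate(problem, candidate)
            # (4) Evaluation: the step's score is its best simulation score.
            step_score = max(score(problem, r) for r in rollouts)
            # (5) Backpropagation: store the updated plan and score.
            heapq.heappush(heap, (-step_score, candidate))
        # Terminate once a complete plan is in memory; return the best one.
        done = [(neg, plan) for neg, plan in heap if is_complete(plan)]
        if done:
            return min(done)[1]  # smallest negated score = highest score
    return None
```
| {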
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
df384d3d-a220-4949-ae96-d76e274f5bb1 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 4 Experimental Setup

## 4.1 Tasks And Datasets
Text-to-SQL Parsing. Text-to-SQL parsing is a code generation task of mapping natural language utterances to SQL queries. It requires agents to ground utterances to database environment and generate multi-step plans as SQL queries, making it an appropriate testbed in our study. To evaluate language agents' potential for text-to-SQL parsing, we adapt two widely used datasets, Spider (Yu et al., 2018) and Bird (Li et al., 2023a).
We use the entire training split in each dataset to prompt or fine-tune LLMs.2 For evaluation, due to resource and budget constraints, we randomly select 400 and 300 development set examples in Spider and Bird, respectively. We also note that model performance may be lower on our evaluation sets because we uniformly sampled examples from each difficulty level, while the original development sets have skewed distributions towards easier examples (Appendix A.1).
Mathematical Reasoning. Mathematical reasoning is a common task for evaluating language agents' multi-step reasoning and planning capabilities. With 500 random examples from GSM8K's development set (Cobbe et al., 2021), we follow program of thoughts (Chen et al., 2023b) to test the agents' ability to plan in Python programs and solve these grade school math word problems. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
491860b5-47f4-49d9-a16b-6d4c98cf4bc0 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 4.2 Models
In all experiments, we use CodeLlama-13B-Instruct as the generator in our framework. We also evaluate three kinds of LLMs as the discriminator: (1) *open-source LLMs*: CodeLlama-7B-Instruct and CodeLlama-13B-Instruct (Rozière et al., 2024), (2) *closed-source LLMs*: GPT-3.5-Turbo (OpenAI, 2022) and GPT-4-Turbo (OpenAI, 2023), and (3) *fine-tuned LLMs*: CodeLlama-7B-Instruct-FT and CodeLlama-13B-Instruct-FT. Their implementation details are in Appendix A.3. For brevity, we will omit "Instruct" in model names. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6947d985-a3b5-4c70-943b-2cce356fcf93 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 4.3 Evaluation
Intrinsic Evaluation. We measure the discrimination abilities of LLMs with four intrinsic metrics. **(1)** Discrimination accuracy (Acc): Given a pair of correct and wrong programs, we calculate the percentage where the correct program obtains a higher discrimination score than the wrong one (Bai et al., 2022; Touvron et al., 2023). **(2)** Classification macro F1 (F1): We treat "correct" and "wrong" as two classes and compute the macro average of F1 scores on these two labels. **(3)** Hit@1 (H@1): Given a batch of candidate programs, we calculate the percentage where the highest-scoring candidate is correct. **(4)** Mean reciprocal rank (MRR): We compute the standard MRR score by the highest-ranking correct program in the batches.
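For clarity, a minimal sketch of the two ranking metrics, assuming each batch is a list of `(score, is_correct)` pairs; treating a batch with no correct program as contributing 0 to MRR is our assumption:

```python
def hit_at_1(batches):
    # Fraction of batches whose top-scoring candidate is correct.
    return sum(max(b, key=lambda x: x[0])[1] for b in batches) / len(batches)

def mean_reciprocal_rank(batches):
    total = 0.0
    for b in batches:
        ranked = sorted(b, key=lambda x: x[0], reverse=True)
        rank = next((i + 1 for i, (_, ok) in enumerate(ranked) if ok), None)
        total += 1.0 / rank if rank else 0.0  # assumption: 0 if none correct
    return total / len(batches)
```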
End-to-End Evaluation. To show the impact of discriminators, we evaluate language agents' endto-end performance using our three planning methods, with execution accuracy for text-to-SQL parsing and answer accuracy for math reasoning. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2962f4a2-db19-4246-8fc3-5b534c812ca8 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 5 Simulation Experiments With Oracle

## 5.1 Oracle-Based Discriminator
To investigate how discrimination accuracy affects the overall performance of language agents using different planning methods (RQ1), we utilize oracle environmental feedback to simulate a discriminator with controllable accuracy. For text-to-SQL parsing, we compare the first five rows in the execution results of predicted and gold SQL queries and calculate their table cell overlaps (Appendix A.4). For mathematical reasoning, we compare the predicted Python programs' answers with the ground truth.
We use a probability-based threshold τ to control the accuracy of each simulated discriminator (Gao et al., 2022). When evaluating each plan, the discriminator first computes a score s with oracle information. Then, it uses a random function to generate a number p ∈ [0, 1). If p < τ, the discriminator returns the score s. Otherwise, it returns an inverted score 1 − s. In this way, we ensure that the discriminator's accuracy is at most τ.
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
287d1329-8a0c-46b8-b93a-372fb7c300fd | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 5.2 Results And Analysis
As shown in Figure 3, discrimination accuracy closely correlates with the performance of agents on all three datasets, no matter which planning method is used. For instance, the performance of re-ranking agents improves linearly as we increase the discrimination accuracy threshold, setting up a strong baseline for agents using other planning methods. We also note that it takes around 80% discrimination accuracy for all agents to outperform greedy generation on text-to-SQL parsing, demonstrating the task's difficulty. To answer *RQ1*, we further analyze the performance of agents using iterative correction and tree search.
Monte-Carlo tree search can be unstable, especially in the early stages. We observe that iterative correction outperforms tree search on Bird (Figure 3b) when the accuracy threshold is 1.0. This observation may be caused by the instability of Monte-Carlo tree search. We first note that McNemar's test finds no difference between iterative correction and tree search (p > 0.05), despite their performance gap (29.3 vs 32.7). The rationales are discussed in Appendix B. Furthermore, we analyze all 25 examples for which iterative correction derives the correct answer but tree search fails. In 12 out of the 25 examples (48%), tree search fails to select the correct partial plan when the discrimination scores are the same. In particular, this can happen in the early stages of tree search, where a correct program has not yet been discovered and all the steps receive a score of 0 from the oracle discriminator. Thus, we consider this underperformance a consequence of search instability. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6dcb7d6e-5544-47e6-b11f-94d265454506 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6 Llm-Based Discriminators
Advanced planning methods demand highly accurate discriminators. For iterative correction agents, their performance is usually indistinguishable from the re-ranking baselines until we maximize the threshold τ = 1.0 (Figure 3). This finding resonates with Huang et al. (2024) that high-quality feedback may be the key to the success of iterative correction. More interestingly, tree search agents consistently underperform the other two when the discrimination accuracy threshold τ ≤ 0.8. Moreover, when raising the threshold to 0.9, we observe a sharp increase in their performance, with which they start to beat other kinds of agents.

Advanced planning methods may not adequately balance accuracy and efficiency. By calculating the average inference time per example (Figure 3), we find that our implementation of tree search is at least 10–20 times slower than the other two planning methods, mainly due to frequent generation of Monte-Carlo simulations (Zhang et al., 2023). While we can remove the simulations to be more efficient and evaluate partial plans, in our preliminary study, we find LLMs would struggle in this setting. This accuracy-efficiency trade-off may hinder real-world applications of tree search methods. Meanwhile, the inference time for iterative correction increases as the accuracy threshold is raised, suggesting more iterations are required to derive a correct answer. This indicates that developing efficient and accurate planning methods remains a key problem for AI agents.

While we have shown that iterative correction and tree search work well with oracle discriminators, it remains unclear whether LLM-based discriminators can correctly assess language agents' actions (*RQ2*). To answer this question, we leverage generator outputs in the simulation experiments and re-label them with ground truths to evaluate the LLMs' discrimination accuracy (Appendix A.2).
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4a2c1a24-3638-4cc1-9db6-68993ae66fde | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6 Llm-Based Discriminators
Table 1: Intrinsic evaluation of LLM-based discriminators on Spider, Bird, and GSM8K‡.

| Models | Spider | | | | Bird | | | | GSM8K‡ | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Acc | F1 | H@1 | MRR | Acc | F1 | H@1 | MRR | Acc | F1 | H@1 | MRR |
| CodeLlama-7B | 54.0 | 37.1 | 56.0 | 62.3 | 44.6 | 46.7 | 13.0 | 18.0 | 48.6 | 38.7 | 36.2 | 46.9 |
| CodeLlama-13B | 58.2 | 37.1 | 57.0 | 63.1 | 49.4 | 46.7 | 12.7 | 18.3 | 62.2 | 38.7 | 41.8 | 51.0 |
| CodeLlama-7B-FT | 62.4 | 60.3 | 59.5 | 64.6 | 52.4 | 46.7 | 14.3 | 19.1 | - | - | - | - |
| CodeLlama-13B-FT | 69.7 | 67.2 | 61.3 | 65.7 | 62.1 | 46.7 | 16.0 | 20.5 | - | - | - | - |
| GPT-3.5-Turbo | 67.0 | 47.3 | 59.0 | 64.3 | 64.3 | 35.7 | 16.0 | 20.5 | 72.1 | 49.1 | 46.6 | 54.0 |
| GPT-4-Turbo | 76.5 | 54.9 | 63.0 | 66.7 | 76.2 | 50.1 | 20.3 | 23.0 | 93.8 | 91.1 | 59.8 | 61.6 |
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
dbb835f4-67d0-4f46-8160-c886cc4eded1 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6 Llm-Based Discriminators
Table 2: Discrimination accuracy of naive and observation-enhanced discriminators.

| | CodeLlama-13B | | | GPT-3.5-Turbo | | | CodeLlama-13B-FT | |
|---|---|---|---|---|---|---|---|---|
| | Spider | Bird | GSM8K | Spider | Bird | GSM8K | Spider | Bird |
| Naive Discriminator | 58.2 | 49.4 | 62.2 | 67.0 | 64.3 | 72.1 | 69.7 | 62.1 |
| + Executability Check | 78.7 | 78.8 | 64.5 | 84.8 | 86.3 | 73.2 | 83.6 | |
| ++ Execution Result | 83.6 | 79.6 | 70.6 | 90.0 | 89.2 | 76.5 | 88.5 | 85.1 |
| Improvement | 25.4 | 30.2 | 8.4 | 23.0 | 24.9 | 4.4 | 18.8 | 23.0 |
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
cc89dd28-f0f4-4a33-b140-ce0904003b39 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6 Llm-Based Discriminators
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
267e601c-a834-4422-85c0-7b8bccfe55ae | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6.1 Naive Discriminators
As Table 1 shows, most open-source LLMs have mediocre discrimination abilities. After fine-tuning, CodeLlama-13B-FT could reach the same level of performance as GPT-3.5. In comparison, closed-source LLMs exhibit stronger discrimination abilities, with GPT-4 achieving the best performance across all three datasets. Although GPT-4 has 93.8 discrimination accuracy on GSM8K and is also better than GPT-3.5 on text-to-SQL parsing, due to its high cost, we will use GPT-3.5 in our experiments. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2a5368ab-227e-4c36-a035-1a23f4eb2f3c | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 6.2 Observation-Enhanced Discriminators
To improve LLMs' discrimination abilities, we conduct an error analysis for CodeLlama-13B on its worst-performing intrinsic evaluation set, Bird. We sample 50 pairs of SQL queries with incorrect predictions from the Bird intrinsic evaluation set. In 25 of the 50 pairs (50%), CodeLlama-13B assigns a higher score to non-executable SQL queries. Consequently, no matter which planning method is used, language agents could hardly perform well with such discriminators.
Motivated by our error analysis, we first propose to add a program executability check as a safeguard for LLMs. If a program is non-executable, our discriminator would discard the LLM's score and return 0. Otherwise, it returns the original LLM score. Besides the executability check, we incorporate the execution results of predicted programs (the first 5 table rows of SQL queries or the answer of a Python program) into the in-context examples and fine-tuning data (Ni et al., 2023a). If a program is non-executable, we use ERROR to represent its execution result.
Evaluation results (Table 2) show that these two non-oracle environmental observations can effectively improve LLMs' discrimination accuracy. Enhanced with environmental observations, CodeLlama-13B can obtain up to 25.4, 30.2, and 8.4 points of absolute accuracy gain on Spider, Bird, and GSM8K, respectively. For the other two models, we also observe significant gains compared to the naive discriminator baseline. Such notable improvements also highlight the importance of filtering out non-executable programs, or invalid plans, during planning.
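A minimal sketch of the two observations as wrappers around an LLM score (the `execute` callable, which returns first rows or an answer and raises on failure, is our stand-in):

```python
def observation(program: str, execute) -> str:
    # Execution result shown to the discriminator in context: the first
    # rows of a SQL result or a Python answer; ERROR if non-executable.
    try:
        return str(execute(program))
    except Exception:
        return "ERROR"

def enhanced_score(program: str, llm_score: float, execute) -> float:
    # Executability check: discard the LLM's score for invalid plans.
    try:
        execute(program)
    except Exception:
        return 0.0
    return llm_score
```
| {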
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1762c9bd-211c-41ba-b49f-3c9a3428cfaf | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7 End-To-End Evaluation
While we have evaluated their discrimination abilities with a fixed test set, to answer *RQ2*, we wonder if LLMs can correctly assess constantly changing sets of programs in actual planning processes. To this end, we evaluate the end-to-end performance of language agents with LLM-based discriminators and the three planning methods. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
20d6fe4d-909e-4d01-86f3-ca999f383ba1 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.1 Text-To-Sql Parsing
As shown in Table 3, agents using naive LLM-based discriminators do not perform well on text-to-SQL parsing. On Spider, the re-ranking agent using CodeLlama-13B-FT has the best accuracy (61.5), which is still lower than greedy generation (62.3), which requires no planning and is more efficient. On Bird, GPT-3.5-Turbo and re-ranking show an accuracy of 18.0, which is slightly higher than greedy generation (16.0). In addition to the mediocre performance, we find that when using naive discriminators, iterative correction and tree search consistently show worse or the same performance as re-ranking. These results mostly agree with our findings in previous experiments that **(1)** advanced planning methods need strong discriminators, and **(2)** naive LLM-based discriminators are not accurate enough.

Table 3: Execution accuracy on text-to-SQL parsing ($^E$ marks observation-enhanced discriminators; Section 6.2).

| Discriminators | Spider (Greedy Gen = 62.3) | | | Bird (Greedy Gen = 16.0) | | |
|---|---|---|---|---|---|---|
| | Re-ranking | Iter. Correct. | Tree Search | Re-ranking | Iter. Correct. | Tree Search |
| CodeLlama-13B | 57.5 | 51.7 | 55.5 | 13.3 | 13.3 | 13.3 |
| GPT-3.5-Turbo | 58.3 | 52.7 | 56.2 | 18.0 | 17.3 | 14.0 |
| CodeLlama-13B-FT | 61.5 | 51.7 | 56.0 | 14.3 | 13.0 | 13.0 |
| CodeLlama-13B$^E$ | 65.5 | 62.0 | 62.5 | 21.0 | 24.3 | 22.7 |
| GPT-3.5-Turbo$^E$ | 67.0 | 67.5 | 66.0 | 22.3 | 25.0 | 22.7 |
| CodeLlama-13B-FT$^E$ | 70.3 | 68.0 | 67.5 | 23.7 | 26.3 | 21.7 |
| Oracle Simulation (τ = 1.0) | 71.0 | 76.0∗ | 76.2∗ | 27.0 | 32.7∗ | 29.3 |

After enhancing the discriminators with two environmental observations (Section 6.2), we effectively improve the agents' performance without any modifications to the generator or the planning methods. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
98b211e0-bda6-4571-b179-1e04a2ec35b7 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.1 Text-To-Sql Parsing
In 5 of the 6 experiments, CodeLlama-13B-FT$^E$ results in the best execution accuracy among all discriminators. It also leads to the overall best performance on Spider with re-ranking (70.3) and on Bird with iterative correction (26.3), showing the effectiveness of fine-tuning LLMs for discrimination and using environmental observations. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0f4fbe8e-bade-47cd-9ac2-d0e7beccb27b | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.2 Mathematical Reasoning
The most interesting result in the mathematical reasoning evaluation (Table 4) is the failure of iterative correction with naive discriminators. When prompting the generator CodeLlama-13B for 0-shot correction, it would disregard the instruction to "generate a fixed python program" (Appendix C), copy the program to be modified, and generate explanations and correction steps in natural language. Such natural language steps, usually having some lexical overlap with the math problem, would increase the discrimination score of LLMs while being non-executable. As a result, our iterative correction agent only has 10.2 answer accuracy when using CodeLlama-13B to evaluate its own generation. While this issue also exists when using GPT-3.5-Turbo as the discriminator, it is less severe because GPT would sometimes assign a high score (> 0.99) to the initial Python program.

Table 4: Answer accuracy on GSM8K (Greedy Gen = 39.4).

| Discriminators | Re-ranking | Iter. Correct. | Tree Search |
|---|---|---|---|
| CodeLlama-13B‡ | 39.7 | 10.2 | 41.0 |
| GPT-3.5-Turbo | 47.0 | | 50.0 |
| CodeLlama-13B$^E$ | 42.8 | 42.2 | 46.0 |
| GPT-3.5-Turbo$^E$ | 47.6 | 48.4 | 51.0 |
| Oracle Simulation (τ = 1) | 64.1 | 66.0 | |
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ec48c217-a813-4530-b24a-726e36312bf3 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.2 Mathematical Reasoning
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d0321866-04c0-4469-a9ef-bf3d33602569 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.2 Mathematical Reasoning
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
61019431-fbbd-42d5-8fb7-d8563b80f43a | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.2 Mathematical Reasoning
|
| ) | |
ing GPT-3.5-Turbo as the discriminator, it is less severe because GPT would sometimes assign a high score (> 0.99) to the initial Python program.
These scores trigger an early exit condition in iterative correction (Section 3.3) and stop the agent from calling the generator to add any natural language, thus avoiding the issue. These findings echo related analysis on self-correction (Stechly et al., 2023; Valmeekam et al., 2023; Huang et al., 2024).
With an executability check, enhanced discriminators help mitigate this issue in iterative correction, which now achieves better performance (42.2 and 48.4) than greedy generation (39.4). Overall, the tree search agent using GPT-3.5-Turbo$^E$ achieves the best answer accuracy. Nevertheless, McNemar's test finds no difference (p > 0.05) between the performance of re-ranking (47.6) and that of iterative correction (48.4) or tree search (51.0). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0e6007b4-e302-4d18-a9ef-3e48b3ea5430 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.3 Analysis
To better understand the end-to-end evaluation results, we conduct an in-depth analysis of examples where re-ranking returns the correct program but iterative correction or tree search does not (Table 5).

Table 5: Error analysis of examples where re-ranking succeeds but iterative correction or tree search fails.

| Error Type | Spider | | Bird | | GSM8K | |
|---|---|---|---|---|---|---|
| | Iter. Correct. | Tree Search | Iter. Correct. | Tree Search | Iter. Correct. | Tree Search |
| Discrimination | 29 (78.4%) | 17 (60.7%) | 9 (52.9%) | 12 (50.0%) | 30 (62.5%) | 6 (66.7%) |
| Exploration | 8 (21.6%) | 11 (39.3%) | 8 (47.1%) | 12 (50.0%) | 18 (37.5%) | 3 (33.3%) |
| Total | 37 | 28 | 17 | 24 | 48 | 9 |

Specifically, we analyze cases of the strongest discriminators, CodeLlama-13B-FT$^E$ for text-to-SQL parsing and GPT-3.5-Turbo$^E$ for mathematical reasoning, and divide them into two kinds of errors. **(1)** *Discrimination error*: The discriminator assigns a higher score for wrong programs than correct ones, which is not recoverable by any planning method. **(2)** *Exploration error*: The planning method has not found the correct program before termination. Our analysis suggests the following:

LLM-based discriminators have not yet met the needs of advanced planning methods. Across all datasets, 50% or more discrimination errors are observed in each planning method. On Spider, the number of such errors in iterative correction is as large as 29 out of 37 (78.4%). In fact, among the 29 errors, iterative correction has already found the correct SQL queries for 15 (40.5% of the total 37 errors) of them. However, not only does the discriminator fail to trigger early exits, but it also assigns a higher score for wrong SQL queries in new iterations. Consequently, these erroneous SQL queries override the originally correct ones, leading to an overall performance drop. The same issue is also serious in tree search. When an incorrect partial program receives a high discrimination score, tree search will commit to it and hardly explore other possibilities, including the correct partial programs. Such discrimination errors usually cannot be recovered by the planning methods themselves, unless they find another correct program with even higher scores. This finding also demonstrates that determining early exits using oracle information in iterative correction may introduce a larger benefit than previously thought (Huang et al., 2024). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
71b4ab26-4783-4fad-9a42-79c97dceb370 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 7.3 Analysis
Advanced planning methods need more thorough exploration. For the remaining cases, we observe that advanced planning methods have not found a correct program before terminating, which we call exploration errors. This kind of error circles our discussion back to the accuracy-efficiency trade-off mentioned in our simulation experiments with oracle (Section 5.2). Indeed, we can extend the exploration of planning methods in various ways, such as loosening termination conditions, increasing the number of generation samples for each step, and adjusting some hyperparameters for more diverse program samples. Yet, all these adjustments can slow down the planning methods and reduce the language agents' efficiency. Additionally, we note that these strategies may not always result in better performance, as the discriminators may give unseen wrong programs a higher score.
For these reasons, iterative correction and tree search cannot gain decent improvements over re-ranking with the same LLM-based discriminator. On text-to-SQL parsing, tree search even shows worse performance than re-ranking when using CodeLlama-13B-FT$^E$ (Table 3: 67.5 vs 70.3 on Spider; 21.7 vs 23.7 on Bird). More surprisingly, on GSM8K, advanced planning methods may not perform much better than re-ranking even with the oracle discriminator (p > 0.05; McNemar's test). Admittedly, some of the performance gains appear considerable, but McNemar's test tells us there are still decent chances of the simpler agent outperforming a more complex one (Appendix B). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b472e7bd-a845-409b-b7dc-4cdbdcd15417 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## 8 Conclusions
This paper presents a thorough investigation into the relationship between discrimination accuracy and performance of planning methods in language agents. Through comprehensive experiments on text-to-SQL parsing and mathematical reasoning, we find that: Discrimination accuracy strongly correlates with the overall performance of language agents using different planning methods and also affects their efficiency (*answer to RQ1*). LLM-based discriminators can correctly assess a decent number of language agents' actions with their environmental observations, but they are still not accurate enough for advanced planning methods (answer to RQ2). Future research should investigate the development of more accurate discrimination models for language agents, e.g. by improving their grounded understanding of execution results beyond error signals. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d342a4fc-7243-41e2-8dc6-b80bebe98966 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Limitations
Experiments with Other Models. In this study, we focus on studying the generation and discrimination of instruction-tuned LLMs that have seen code data during pre-training. This consideration is because: **(a)** They may have better in-context learning performance on our two tasks, text-to-SQL parsing and mathematical reasoning with program-of-thought (Ni et al., 2023b); **(b)** We want to leverage their 0-shot instruction following capabilities in iterative correction for fair comparisons with other planning methods; **(c)** For GSM8K problems, LLMs tend to generate natural language plans instead of programs with 2-shot prompting, and some instructions other than in-context examples help to mitigate this issue. Future research may extend our study to other LLMs of code and conduct an ablation study of instruction-tuning's impact on models' discrimination accuracy.
Experiments with Natural Language Plans. Our study focuses on the generation and discrimination of formal language plans, i.e., programs, as they can directly interact with the environment. Although feasible for mathematical reasoning (Wei et al., 2022), natural language plans require another semantic parsing step to convert them into actions defined in the corresponding environment, which may introduce intermediate errors and add noise to our analysis. Therefore, we conduct the experiments with formal language plans using LLMs trained on code data. As a future direction, it would be interesting to extend our study to natural language plans and see how the intermediate semantic parsing step would affect the overall performance of agents for mathematical reasoning.
Impact of Generators on Planning Methods. While our work focuses on studying the relationship between different discriminators and planning methods, we acknowledge that the generator can also actively affect different planning methods. For example, we can transform the generator's perplexity into a probability and multiply it by the discriminator's score. We exclude such uses of the generator because in our preliminary experiments, we find that incorporating its perplexity leads to mixed results. These results make it even harder to analyze how language agents behave when using different planning methods. Thus, we exclude the generator to have a clear picture of how discriminators can affect planning methods. Nevertheless, it is worth studying the generator's impact on planning methods in future work. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9062ec53-0b87-44c6-a9f7-a2bc461651a1 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Acknowledgements
We would like to thank colleagues from the OSU NLP group for their thoughtful comments. This research was supported in part by a sponsored award from Cisco Research, NSF IIS-1815674, NSF CAREER #1942980, NSF OAC-2112606, and the Ohio Supercomputer Center (Center, 1987). The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c8d75405-7814-45fd-8cf5-ff6208e802c0 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## A Implementation Details

## A.1 Text-To-Sql Parsing Evaluation Sets
For text-to-SQL parsing, we sub-sample the development splits of each dataset, Spider and Bird, following three steps: **(1)** categorize development set examples by difficulty levels defined in each dataset, **(2)** randomly select a database and choose one associated example, and **(3)** repeat step 2 until we have 100 samples for each difficulty level. In this way, we ensure a uniform distribution across different difficulty levels and databases.
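A minimal sketch of this sub-sampling procedure (field names like `difficulty` and `db_id` are our assumptions about the example schema, and the sketch assumes each level has enough examples):

```python
import random
from collections import defaultdict

def subsample(dev_set, per_level=100, seed=0):
    # Step 1: categorize examples by difficulty level, then by database.
    by_level = defaultdict(lambda: defaultdict(list))
    for ex in dev_set:
        by_level[ex["difficulty"]][ex["db_id"]].append(ex)
    rng = random.Random(seed)
    sampled = []
    # Steps 2-3: pick a random database, then one associated example,
    # until each difficulty level has `per_level` distinct examples.
    for by_db in by_level.values():
        picked = []
        while len(picked) < per_level:
            db = rng.choice(list(by_db))
            ex = rng.choice(by_db[db])
            if ex not in picked:
                picked.append(ex)
        sampled.extend(picked)
    return sampled
```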
Text-to-SQL parsing models, including LLMs, may show lower performance on our evaluation sets because of their uniformly distributed difficulty (100 examples per level). In comparison, the original datasets have skewed distributions towards easier examples. Spider's development set has 248 (24.0%) examples at the easy level and 446 (43.1%) examples at the medium level, while the hard and extra-hard examples only sum up to 32.9% of the 1,034 examples. In Bird, 925 out of the 1,534 (60.3%) development set examples are at the simple level, 465 examples (30.3%) are at the moderate level, and only 144 examples (9.4%) are at the challenging level. Our evaluation sets normalize these skewed distributions and make the macro averages of model performance less biased (Section 4.1). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
bfb23464-06ef-45c8-96b7-3c339dfe5615 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## A.2 Intrinsic Evaluation Data
To evaluate LLMs' discrimination performance, we reuse the generation results from our oracle-simulation experiments (Section 6). Specifically, we use the evaluation scripts to re-label the generated programs in the simulated re-ranking experiments (accuracy threshold τ = 1.0).
|                           | Spider | Bird  | GSM8K |
|---------------------------|--------|-------|-------|
| Number of Programs        | 1,221  | 1,291 | 2,453 |
| Number of Program Pairs   | 409    | 269   | 1,238 |
| Number of Program Batches | 400    | 300   | 500   |

Table A.1: Statistics of the intrinsic evaluation sets.
Then, we construct our intrinsic evaluation sets based on the re-labeled programs (Table A.1). Intuitively, the number of program batches for each dataset matches the number of end-to-end evaluation examples, and the number of programs counts all unique programs across those batches. To pair the programs and calculate discrimination accuracy, we iterate through each batch and enumerate combinations of correct and wrong programs within it, as sketched below. We do not include cross-batch pairs, as those do not align with our end-to-end evaluation settings.
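The within-batch pairing can be sketched as follows, assuming each batch is a list of `(program, is_correct)` tuples (a hypothetical representation of the re-labeled programs):

```python
from itertools import product

def make_program_pairs(batches):
    pairs = []
    for batch in batches:
        correct = [p for p, ok in batch if ok]
        wrong = [p for p, ok in batch if not ok]
        # Enumerate (correct, wrong) combinations within the batch only;
        # cross-batch pairs are deliberately excluded.
        pairs.extend(product(correct, wrong))
    return pairs
```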
For discrimination accuracy, we enumerate pairs of correct and wrong programs and ask LLMs to select the better one. For classification F1, we let LLMs predict the correctness of each individual program. For Hit@1 and MRR, we use LLMs to score the batches of programs in simulation experiments. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c17cfdbf-7972-43d5-80b5-d5ba61aba922 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## A.3 Prompting And Training Llms
Prompting the Generator LM. We prompt our generator LM, CodeLlama-13B-Instruct, with temperature-based sampling to obtain different program suggestions (Section 3.1). We use the model checkpoint and generation function implemented by Hugging Face (Wolf et al., 2020), setting the maximum generation length max_length = 300, temperature temperature = 0.6, and number of samples num_return_sequences = 5.
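A minimal sketch of this sampling setup with the Hugging Face `generate` API is given below; the checkpoint name and the placeholder prompt are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face checkpoint name for CodeLlama-13B-Instruct.
model_name = "codellama/CodeLlama-13b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "-- Question: How many singers do we have?\n-- SQL:\nSELECT"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,            # enable temperature-based sampling
    temperature=0.6,
    max_length=300,
    num_return_sequences=5,    # five program suggestions
)
suggestions = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```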
Data for Discriminator LMs. For text-to-SQL parsing, we perform 2-fold cross-validation on the training sets to synthesize incorrect SQL queries for each example (Chen et al., 2023a). We prompt the LM using one pair of correct and wrong SQL queries (labeled with "Yes" and "No"), also retrieved by BM25 (Section 3.2). Alternatively, we fine-tune the LM on the entire training set with ground-truth and synthesized SQL queries to generate "Yes" or "No." For mathematical reasoning, we annotate two incorrect Python programs for the two examples used in the generator's prompt. Similar to text-to-SQL parsing, we use the two program pairs to prompt LMs for binary question answering. Since the training set of GSM8K is not annotated with programs of thought, we are not able to fine-tune LMs on this dataset.
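The 2-fold synthesis step might look like the sketch below; `train_set`, `finetune`, `generate_sql`, and `execution_match` are hypothetical stand-ins for the actual data and training/evaluation code.

```python
def mine_negatives(train_set, finetune, generate_sql, execution_match):
    """2-fold negative mining: train on one half, generate on the other,
    and keep incorrect predictions as "No" examples. The three callables
    are hypothetical placeholders for the real training/inference code."""
    half = len(train_set) // 2
    folds = (train_set[:half], train_set[half:])
    negatives = []
    for train_fold, eval_fold in (folds, folds[::-1]):
        model = finetune(train_fold)                        # hypothetical helper
        for ex in eval_fold:
            pred_sql = generate_sql(model, ex["question"])  # hypothetical helper
            if not execution_match(pred_sql, ex["gold_sql"]):
                negatives.append({"example": ex, "sql": pred_sql, "label": "No"})
    return negatives
```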
Prompting Discriminator LMs. For CodeLlama-7B-Instruct and CodeLlama-13B-Instruct (Section 3.2), we simply feed the input prompt to the models to get the last logit's values, which give us the token-level probability after applying the softmax function.
For GPT-3.5-Turbo and GPT-4-Turbo, we access them through the OpenAI API (OpenAI, 2022, 2023); the specific model versions are gpt-3.5-turbo-1106 and gpt-4-1106-preview, respectively. We prompt the LLMs to generate one token and leverage the top_logprobs request to check the top-5 tokens and their probabilities. If "Yes" appears as one of the top-5 tokens, we take its probability p without any modifications. If "Yes" is missing and "No" appears as one of the top-5 tokens, we invert its probability to 1 − p as the score.
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2c272951-43ee-4669-be8f-f1f4dfc0347f | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## A.3 Prompting And Training Llms
Otherwise, our implementation returns 0 when both tokens are missing from the top 5, though this case should be rare and may not occur at all in our experiments.
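Putting the scoring rule together, a minimal sketch is shown below, assuming `top_probs` maps each returned top-5 token to its probability (a hypothetical representation of the API response):

```python
def yes_score(top_probs):
    """Score a program by the probability of the "Yes" token."""
    if "Yes" in top_probs:
        return top_probs["Yes"]     # take p directly
    if "No" in top_probs:
        return 1 - top_probs["No"]  # invert to 1 - p
    return 0.0                      # both tokens missing (rare)
```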
Training Discriminator LMs. To get CodeLlama-7B-Instruct-FT and CodeLlama-13B-Instruct-FT (Section 3.2), we again use the checkpoints and trainer implemented by Hugging Face. We fine-tune the models to generate the next token ("Yes" or "No") based on the input prompts using the following hyperparameters:

- Number of epochs: 1
- Batch size: 128
- Learning rate: 1e-5
- Warmup ratio: 3%
- Scheduler: cosine

The inference procedure for the fine-tuned models is the same as how we prompt the original LLMs, but without using any in-context examples.
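These hyperparameters might translate to Hugging Face's `TrainingArguments` roughly as below; the output path and the split of the 128 total batch size into per-device batch size and gradient accumulation are assumptions.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codellama-discriminator-ft",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 effective batch size
    learning_rate=1e-5,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
)
```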
Computing Resources. All of our experiments on Spider and GSM8K use up to four NVIDIA RTX A6000 GPUs (48GB). Experiments on Bird use up to four NVIDIA A100 Tensor Core GPUs (80GB).
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
19634746-a77a-40e8-9aeb-6314f041c1dc | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## A.4 Implementation Of Oracle Discriminator
For text-to-SQL parsing, our oracle uses the first five rows in the execution results of the predicted and gold SQL queries and calculates the table-cell overlap. More specifically, the calculation is similar to span F1 in machine reading comprehension. Our oracle function first compares each row in the execution results head-to-head, under a strong assumption that the rows are ordered. Although strict, this assumption is helpful for evaluating the correctness of SQL queries with an ORDER BY clause. Then, the function counts how many table cells overlap with each other in an unordered manner. We divide the number of overlapping cells by the total number of cells in the execution results of the gold SQL query (precision) and of the predicted one (recall). Finally, we compute the harmonic mean of these two numbers to get the oracle score (F1).
For instance, suppose the gold execution result is "-- countryid: 1, 2, 4, 5 -- countryname: usa, germany, japan, italy" and the predicted SQL query returns "-- countryid: 1, 4, 6 -- countryname: usa, japan, japan". We compare (1, usa), (4, japan), and (6, japan) with the first, second, and third rows in the gold result, respectively; they have 2, 0, and 1 overlapping table cells. Thus, the precision is 3/8 = 0.375 and the recall is 3/6 = 0.5, and the oracle's score is:
$${\frac{2\cdot0.375\cdot0.5}{0.375+0.5}}=0.43$$
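A minimal sketch of this cell-overlap F1 is shown below; it reproduces the worked example, though the exact row-alignment details of the paper's actual implementation are assumptions.

```python
from collections import Counter

def oracle_f1(gold_rows, pred_rows):
    # Compare rows head-to-head (ordered), counting overlapping cells
    # within each aligned pair of rows in an unordered manner.
    overlap = 0
    for g_row, p_row in zip(gold_rows, pred_rows):
        overlap += sum((Counter(g_row) & Counter(p_row)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(len(r) for r in gold_rows)  # as defined above
    recall = overlap / sum(len(r) for r in pred_rows)
    return 2 * precision * recall / (precision + recall)

# Worked example from the text:
gold = [(1, "usa"), (2, "germany"), (4, "japan"), (5, "italy")]
pred = [(1, "usa"), (4, "japan"), (6, "japan")]
print(round(oracle_f1(gold, pred), 2))  # 0.43
```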
For mathematical reasoning, our oracle directly checks whether the predicted answer equals the ground truth. If the answer is None (a non-executable program) or does not equal the ground truth, the oracle returns 0. Otherwise, it returns 1.
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4b3a74a0-4647-4a44-9cef-183699d27469 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## B McNemar's Test For Statistical Significance
We measure the statistical significance of performance gains using the exact McNemar's Test (McNemar, 1947). We choose the test's exact binomial version because our sample sizes are relatively small (Edwards, 1948), and the first two significant digits of the p-values are the same for this binomial version and the original chi-square version in our tests. Intuitively, this test measures how likely it is that the weaker method could still outperform the stronger one.

For example, we consider the comparison between tree search and iterative correction on Bird when using CodeLlama-13B-FTE as the discriminator (Section 5.2). By computing a 2 × 2 contingency table (Table B.1), McNemar's Test focuses on the 40 examples where only one of the two methods predicts correctly.
|            | IC Correct | IC Wrong |
|------------|------------|----------|
| TS Correct | 73         | 15       |
| TS Wrong   | 25         | 187      |

Table B.1: Contingency table for tree search (TS) vs. iterative correction (IC) on Bird with the CodeLlama-13B-FTE discriminator.
Specifically, there are 25 examples where iterative correction finds the correct answer but tree search does not, which is the source of the performance gain. There are also 15 examples where iterative correction fails but tree search succeeds. According to McNemar's Test, these 15 examples (37.5% of the total 40) result in a p-value of 0.15, meaning there is still some chance for tree search to outperform iterative correction.

In contrast, suppose there were only 10 examples where iterative correction finds the correct answer but tree search does not, and no examples where iterative correction fails but tree search succeeds. Then we would observe the same accuracy gain, but it would now be statistically significant, because it would be almost impossible for tree search to outperform iterative correction (0 out of 10). The same rationale also applies to the other tests in Section 7.
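The exact binomial form of the test can be computed directly from the two discordant counts; the sketch below (using `scipy`) reproduces the p-value of 0.15 quoted above.

```python
from scipy.stats import binom

def exact_mcnemar_p(only_a_correct, only_b_correct):
    # Two-sided exact McNemar's test over the discordant pairs.
    n = only_a_correct + only_b_correct
    k = min(only_a_correct, only_b_correct)
    return min(1.0, 2 * binom.cdf(k, n, 0.5))

# Table B.1: 15 examples where only tree search succeeds,
# 25 where only iterative correction succeeds.
print(round(exact_mcnemar_p(15, 25), 2))  # ~0.15
```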
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d4b753e3-5e6f-4ff5-9fc7-1a400a9ae827 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## C Prompt Examples
Given database schema and a question in natural language, generate the corresponding SQL query.
-- Database climbing:
-- Table mountain: mountain_id, name, height, prominence, range, country
-- Table climber: climber_id, name, country, time, points, mountain_id
-- Question: How many distinct countries are the climbers from?
-- SQL:
SELECT COUNT(DISTINCT country) FROM climber;
-- Database concert_singer:
-- Table stadium: stadium_id, location, name, capacity, highest, lowest, average
-- Table singer: singer_id, name, country, song_name, song_release_year, age, is_male
-- Table concert: concert_id, concert_name, theme, stadium_id, year
-- Table singer_in_concert: concert_id, singer_id
-- Question: What are all distinct countries where singers above age 20 are from?
-- SQL:
SELECT
Given database schema and a question in natural language, correct the buggy SQL query and generate a fixed SQL query.
-- Database concert_singer:
-- Table stadium: stadium_id, location, name, capacity, highest, lowest, average
-- Table singer: singer_id, name, country, song_name, song_release_year, age, is_male
-- Table concert: concert_id, concert_name, theme, stadium_id, year
-- Table singer_in_concert: concert_id, singer_id
-- Question: What are all distinct countries where singers above age 20 are from?
-- Buggy SQL:
SELECT DISTINCT country FROM singer WHERE age > 20;
-- Fixed SQL:
SELECT
Answer the following Yes/No question: Is the SQL correct given the utterance?
-- Utterance: How many different countries are all the swimmers from?
-- SQL:
SELECT COUNT(DISTINCT nationality) FROM swimmer;
-- Answer: Yes
-- Utterance: How many different countries are all the swimmers from?
-- SQL:
SELECT DISTINCT country FROM swimmer;
-- Answer: No
-- Utterance: What are all distinct countries where singers above age 20 are from?
-- SQL:
SELECT DISTINCT country FROM singer WHERE age > 20;
-- Answer:
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
460d6d79-43e2-45f0-a9be-ce3cd9fd3e67 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## C Prompt Examples
Answer the following Yes/No question: Is the SQL correct given the utterance and its result?
-- Utterance: How many different countries are all the swimmers from?
-- SQL:
SELECT COUNT(DISTINCT nationality) FROM swimmer;
-- Result:
-- count(distinct nationality): 7
-- Answer: Yes
-- Utterance: How many different countries are all the swimmers from?
-- SQL:
SELECT DISTINCT country FROM swimmer;
-- Result:
ERROR
-- Answer: No
-- Utterance: What are all distinct countries where singers above age 20 are from?
-- SQL:
SELECT DISTINCT country FROM singer WHERE age > 20;
-- Result:
-- country: Netherlands, United States, France
-- Answer: | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e66934f4-1bca-492e-baab-bed5276f8ceb | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Given questions in the comment, use python programs to produce the correct answers with the 'answer' variable.

## James takes 2 Tylenol tablets that are 375 mg each, every 6 hours. How many mg does he take a day?
## Python Program:
mg_tylenol_per_tablet = 375
mg_tylenol_taken_each_time = 2 * mg_tylenol_per_tablet
hours_per_day = 24
times_per_day = hours_per_day / 6
mg_each_day = mg_tylenol_taken_each_time * times_per_day
answer = mg_each_day

## There were 63 Easter eggs in the yard. Hannah found twice as many as Helen. How many Easter eggs did Hannah find?
## Python Program:
n_easter_eggs = 63
unit_times = 2
total_units = unit_times + 1
n_easter_eggs_per_unit = n_easter_eggs / total_units
n_easter_eggs_helen = n_easter_eggs_per_unit * 1
n_easter_eggs_hannah = n_easter_eggs_per_unit * 2
answer = n_easter_eggs_hannah
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
40ba4721-47d1-4cda-ae6f-0092a1d8c4e8 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Gloria is shoe shopping when she comes across a pair of boots that fit her shoe
budget. However, she has to choose between the boots and two pairs of high heels that together cost five dollars less than the boots. If one pair of heels costs $33 and the other costs twice as much, how many dollars are the boots?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b24d002e-a21c-4517-913d-599c82b4ec14 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Buggy Python Program:
price_boots = 50
price_heels = 33
price_heels_twice = 2 * price_heels
price_heels_total = price_heels + price_heels_twice
price_boots_difference = price_boots - price_heels_total
answer = price_boots_difference | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
504e3178-ef4b-4076-9058-9dbc02672a55 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## James takes 2 Tylenol tablets that are 375 mg each,
every 6 hours. How many mg does he take a day?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f78e9e5e-476c-4dc0-8f95-72df8f083154 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
mg_tylenol_per_tablet = 375
mg_tylenol_taken_each_time = 2 * mg_tylenol_per_tablet
hours_per_day = 24
times_per_day = hours_per_day / 6
mg_each_day = mg_tylenol_taken_each_time * times_per_day
answer = mg_each_day | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1f7aca51-9a54-435b-84e1-a8412f2758c8 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## James takes 2 Tylenol tablets that are 375 mg each,
every 6 hours. How many mg does he take a day?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c450fa86-c3e2-4c0e-a225-35ed40e3e01d | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
mg_per_tablet = 375
n_tablets_per_day = 2
n_tablets_per_6hrs = n_tablets_per_day / 6
mg_per_6hrs = mg_per_tablet * n_tablets_per_6hrs
answer = mg_per_6hrs | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0563fee2-ba9b-414e-9895-fc6c0e32774b | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## There were 63 Easter eggs in the yard.
Hannah found twice as many as Helen. How many Easter eggs did Hannah find?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
aa9b5f67-14c4-454e-9cfa-0e377a16f915 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
n_easter_eggs = 63
unit_times = 2
total_units = unit_times + 1
n_easter_eggs_per_unit = n_easter_eggs / total_units
n_easter_eggs_helen = n_easter_eggs_per_unit * 1
n_easter_eggs_hannah = n_easter_eggs_per_unit * 2
answer = n_easter_eggs_hannah | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
333ee4e2-44e3-4510-ad34-56510a33a47b | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## There were 63 Easter eggs in the yard.
Hannah found twice as many as Helen. How many Easter eggs did Hannah find?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
012933d6-cea7-4431-8c57-830cc04c7426 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
eggs_in_yard = 63
eggs_found_by_hannah = 2 * eggs_in_yard
eggs_found_by_helen = eggs_found_by_hannah / 2
answer = eggs_found_by_hannah | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
439382a2-2622-4509-9d92-36f4d27aeaf0 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Gloria is shoe shopping when she comes across a pair of boots that fit her shoe
budget. However, she has to choose between the boots and two pairs of high heels that together cost five dollars less than the boots. If one pair of heels costs $33 and the other costs twice as much, how many dollars are the boots?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4bbf5650-09ca-423f-a7ef-d749c03ea14f | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
price_boots = 50
price_heels = 33
price_heels_twice = 2 * price_heels
price_heels_total = price_heels + price_heels_twice
price_boots_difference = price_boots - price_heels_total
answer = price_boots_difference | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a957a4f0-4aab-44a3-866d-4271a0ccf631 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## James takes 2 Tylenol tablets that are 375 mg each,
every 6 hours. How many mg does he take a day?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a2cf206a-3971-47a7-8268-7af8e0fca42a | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
mg_tylenol_per_tablet = 375
mg_tylenol_taken_each_time = 2 * mg_tylenol_per_tablet
hours_per_day = 24
times_per_day = hours_per_day / 6
mg_each_day = mg_tylenol_taken_each_time * times_per_day
answer = mg_each_day | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a6b0c03b-4f1c-4a14-b5ec-26926091b84a | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## James takes 2 Tylenol tablets that are 375 mg each,
every 6 hours. How many mg does he take a day?
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6b12e095-dae1-4c1f-8de5-66ea067205f4 | # When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator
## Python Program:
mg_per_tablet = 375
n_tablets_per_day = 2
n_tablets_per_6hrs = n_tablets_per_day / 6
mg_per_6hrs = mg_per_tablet * n_tablets_per_6hrs
answer = mg_per_6hrs | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10890v1.md",
"file_path": "paper_data/2402.10890v1.md",
"file_size": 67960,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
AutoRAG evaluation dataset
Made with 2024 LLM research articles (papers)
This dataset is an example for AutoRAG. You can directly use this dataset for optimizing and benchmarking your RAG setup in AutoRAG.
How was this dataset created?
This dataset is 100% synthetically generated using GPT-4 and Marker Inc. technology. First, we collected 110 recent LLM papers from arXiv and used the Marker OCR model to extract their text, which we chunked with MarkdownSplitter and TokenSplitter from Langchain. To improve quality, we deleted all References sections from the research articles. We then randomly selected 520 passages from the chunked corpus for question generation. Finally, our custom pipeline generated diverse and unique questions with GPT-4.
Acknowledgements
This dataset's corpus originates from various LLM-related research articles on arXiv. Marker Inc. does not hold copyright or any other rights to the corpus content itself.
Also, this is an alpha version of our evaluation-data generation pipeline without human verification, so its quality may be lower than a human-curated dataset.