I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
Abstract
Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks. Despite recent advances, these models are still prone to what are commonly known as hallucinations, causing them to emit unwanted and factually incorrect text. In this work, we propose a novel calibration method that can be used to combat hallucinations. We add a special [IDK] ("I don't know") token to the model's vocabulary and introduce an objective function that shifts probability mass to the [IDK] token for incorrect predictions. This approach allows the model to express uncertainty in its output explicitly. We evaluate our proposed method across multiple model architectures and factual downstream tasks. We find that models trained with our method are able to express uncertainty in places where they would previously make mistakes while suffering only a small loss of encoded knowledge. We further perform extensive ablation studies of multiple variations of our approach and provide a detailed analysis of the precision-recall tradeoff of our method.
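To make the idea concrete, below is a minimal sketch of what an [IDK]-augmented training objective could look like. It assumes a PyTorch setting in which the [IDK] token is appended as the last vocabulary entry and a hyperparameter `idk_weight` (an illustrative name, not from the paper) controls how much probability mass is shifted onto [IDK] when the model's greedy prediction is wrong; this is an illustration of the general technique, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def idk_loss(logits: torch.Tensor, targets: torch.Tensor, idk_weight: float = 0.5) -> torch.Tensor:
    """Cross-entropy with soft targets that shift mass to an [IDK] token on errors.

    logits:  (batch, vocab_size) scores, with [IDK] assumed to be the last vocab index.
    targets: (batch,) gold token ids.
    idk_weight: fraction of probability mass moved to [IDK] for incorrect predictions.
    """
    vocab_size = logits.size(-1)
    idk_id = vocab_size - 1  # assumption: [IDK] is the last vocabulary entry

    # Positions where the model's greedy prediction disagrees with the gold token.
    wrong = (logits.argmax(dim=-1) != targets).float()  # (batch,)

    # Start from one-hot targets, then, for incorrect predictions, move
    # `idk_weight` of the mass from the gold token onto the [IDK] token.
    soft_targets = F.one_hot(targets, vocab_size).float()
    shift = idk_weight * wrong.unsqueeze(-1)              # (batch, 1)
    soft_targets = soft_targets * (1.0 - shift)           # scale down the gold token
    soft_targets[:, idk_id] += shift.squeeze(-1)          # add mass to [IDK]

    # Standard soft-label cross-entropy.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```

In a sketch like this, larger values of `idk_weight` would push the model to abstain more aggressively, which corresponds to the kind of precision-recall tradeoff the paper analyzes.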
Community
LLMs have yet to fully internalize the concept of uncertainty. Can we pretrain them to handle it better?
In our new paper "I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token" we propose a new objective for uncertainty training and show how it can be used to improve factual accuracy.
Check it out!
This is an automated message from the Librarian Bot. I found the following papers similar to this one, recommended by the Semantic Scholar API:
- From Uncertainty to Trust: Enhancing Reliability in Vision-Language Models with Uncertainty-Guided Dropout Decoding (2024)
- Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning (2024)
- Uncovering Uncertainty in Transformer Inference (2024)
- Semantic Consistency-Based Uncertainty Quantification for Factuality in Radiology Report Generation (2024)
- Post-hoc Probabilistic Vision-Language Models (2024)
- A Survey on Uncertainty Quantification of Large Language Models: Taxonomy, Open Research Challenges, and Future Directions (2024)
- Beyond Logit Lens: Contextual Embeddings for Robust Hallucination Detection & Grounding in VLMs (2024)