LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals
Abstract
LUMINA detects hallucinations in RAG systems by quantifying external context utilization and internal knowledge utilization, outperforming existing methods on benchmarks.
Retrieval-Augmented Generation (RAG) aims to mitigate hallucinations in large language models (LLMs) by grounding responses in retrieved documents. Yet, RAG-based LLMs still hallucinate even when provided with correct and sufficient context. A growing line of work suggests that this stems from an imbalance between how models use external context and their internal knowledge, and several approaches have attempted to quantify these signals for hallucination detection. However, existing methods require extensive hyperparameter tuning, limiting their generalizability. We propose LUMINA, a novel framework that detects hallucinations in RAG systems through context-knowledge signals: external context utilization is quantified via distributional distance, while internal knowledge utilization is measured by tracking how predicted tokens evolve across transformer layers. We further introduce a framework for statistically validating these measurements. Experiments on common RAG hallucination benchmarks and four open-source LLMs show that LUMINA achieves consistently high AUROC and AUPRC scores, outperforming prior utilization-based methods by up to +13% AUROC on HalluRAG. Moreover, LUMINA remains robust under relaxed assumptions about retrieval quality and model matching, offering both effectiveness and practicality.
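The two signals described in the abstract can be pictured with a short sketch. This is an illustrative approximation only, not the authors' implementation: the model name, prompt templates, the use of Jensen-Shannon distance for the context signal, and the logit-lens layer-agreement proxy for the knowledge signal are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code). It approximates:
# (1) external context utilization as a distributional distance between
#     next-token distributions with vs. without the retrieved context, and
# (2) internal knowledge utilization as how consistently the final predicted
#     token already appears across layers (a logit-lens-style probe).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumed model choice
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def context_utilization(question: str, context: str) -> float:
    """Jensen-Shannon divergence between next-token distributions with/without context."""
    def next_token_dist(prompt: str) -> torch.Tensor:
        ids = tok(prompt, return_tensors="pt").input_ids
        logits = model(ids).logits[0, -1]
        return F.softmax(logits.float(), dim=-1)

    p = next_token_dist(f"Context: {context}\nQuestion: {question}\nAnswer:")
    q = next_token_dist(f"Question: {question}\nAnswer:")
    m = 0.5 * (p + q)
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m)
    js = 0.5 * (F.kl_div(m.log(), p, reduction="sum") + F.kl_div(m.log(), q, reduction="sum"))
    return js.item()

@torch.no_grad()
def knowledge_utilization(prompt: str) -> float:
    """Fraction of layers whose logit-lens top token already equals the final prediction."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model(ids, output_hidden_states=True)
    final_token = out.logits[0, -1].argmax()
    # Llama-style modules assumed: final RMSNorm at model.model.norm, lm_head as output head.
    lm_head, norm = model.get_output_embeddings(), model.model.norm
    agree = [
        (lm_head(norm(h[0, -1])).argmax() == final_token).item()
        for h in out.hidden_states[1:]  # skip the embedding layer
    ]
    return sum(agree) / len(agree)
```

A low context-utilization score together with heavy reliance on early-emerging internal predictions would, under these assumptions, flag a response for closer inspection; the paper's actual scoring and calibration may differ.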
Community
New paper: LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals
RAG systems can still hallucinate, often due to conflicts between an LLM's internal knowledge and the retrieved external context. Quantifying how an LLM draws on its internal knowledge and on the external context while generating a response can therefore help detect hallucinations.
In this paper:
- We propose LUMINA, a novel approach to quantify utilization of external context and internal knowledge for RAG-based hallucination detection.
- We propose a framework to statistically validate LUMINA's utilization measurements, showing that they behave as intended (see the sketch after this list).
- We conduct extensive experiments and show that LUMINA outperforms both score-based and learning-based methods in hallucination detection, establishing a new state-of-the-art.
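The sketch below illustrates how such a statistical validation and the reported AUROC/AUPRC evaluation might be run. The synthetic scores, the Mann-Whitney U test, and the metric functions are assumptions for illustration, not the paper's protocol; in practice the signals and labels would come from a benchmark such as HalluRAG.

```python
# Hedged sketch of validating a utilization signal and scoring detection quality.
# `signals` are per-response utilization scores, `labels` mark hallucinations (1) vs.
# faithful responses (0); here both are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
signals = np.concatenate([rng.normal(0.2, 0.10, 500),   # placeholder: faithful responses
                          rng.normal(0.5, 0.15, 500)])  # placeholder: hallucinated responses
labels = np.concatenate([np.zeros(500), np.ones(500)])

# Statistical check: do hallucinated responses receive significantly different signal values?
u_stat, p_value = mannwhitneyu(signals[labels == 1], signals[labels == 0],
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.2e}")

# Threshold-free detection quality, the metrics reported in the paper.
print(f"AUROC = {roc_auc_score(labels, signals):.3f}")
print(f"AUPRC = {average_precision_score(labels, signals):.3f}")
```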
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- D$^2$HScore: Reasoning-Aware Hallucination Detection via Semantic Breadth and Depth Analysis in LLMs (2025)
- Unsupervised Hallucination Detection by Inspecting Reasoning Processes (2025)
- MetaRAG: Metamorphic Testing for Hallucination Detection in RAG Systems (2025)
- GLSim: Detecting Object Hallucinations in LVLMs via Global-Local Similarity (2025)
- Turk-LettuceDetect: A Hallucination Detection Models for Turkish RAG Applications (2025)
- D-LEAF: Localizing and Correcting Hallucinations in Multimodal LLMs via Layer-to-head Attention Diagnostics (2025)
- Decoding Memories: An Efficient Pipeline for Self-Consistency Hallucination Detection (2025)