arxiv:2509.21875

LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals

Published on Sep 26 · Submitted by Min-Hsuan Yeh on Sep 30
Abstract

AI-generated summary: LUMINA detects hallucinations in RAG systems by quantifying external context utilization and internal knowledge utilization, outperforming existing methods on benchmarks.

Retrieval-Augmented Generation (RAG) aims to mitigate hallucinations in large language models (LLMs) by grounding responses in retrieved documents. Yet, RAG-based LLMs still hallucinate even when provided with correct and sufficient context. A growing line of work suggests that this stems from an imbalance between how models use external context and their internal knowledge, and several approaches have attempted to quantify these signals for hallucination detection. However, existing methods require extensive hyperparameter tuning, limiting their generalizability. We propose LUMINA, a novel framework that detects hallucinations in RAG systems through context-knowledge signals: external context utilization is quantified via distributional distance, while internal knowledge utilization is measured by tracking how predicted tokens evolve across transformer layers. We further introduce a framework for statistically validating these measurements. Experiments on common RAG hallucination benchmarks and four open-source LLMs show that LUMINA achieves consistently high AUROC and AUPRC scores, outperforming prior utilization-based methods by up to +13% AUROC on HalluRAG. Moreover, LUMINA remains robust under relaxed assumptions about retrieval quality and model matching, offering both effectiveness and practicality.
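To make the two signals concrete, here is a minimal, illustrative sketch rather than the paper's exact formulation: it assumes Jensen-Shannon divergence as the distributional distance, a logit-lens-style readout for tracking predicted tokens across layers, and Llama-style module names (`model.model.norm`, `model.lm_head`) for the per-layer decoding; the model name and prompt templates are placeholders.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choice; any open-source causal LM with exposed hidden states works.
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

@torch.no_grad()
def next_token_dist(prompt: str) -> torch.Tensor:
    """Probability distribution over the next token for a given prompt."""
    ids = tok(prompt, return_tensors="pt").input_ids
    return F.softmax(model(ids).logits[0, -1], dim=-1)

def context_utilization(question: str, context: str) -> float:
    """External-context signal: a distributional distance (Jensen-Shannon
    divergence here) between next-token distributions with and without context."""
    p = next_token_dist(f"Context: {context}\nQuestion: {question}\nAnswer:")
    q = next_token_dist(f"Question: {question}\nAnswer:")
    m = 0.5 * (p + q)
    # F.kl_div(m.log(), p) computes KL(p || m) when p is given as probabilities.
    js = 0.5 * (F.kl_div(m.log(), p, reduction="sum")
                + F.kl_div(m.log(), q, reduction="sum"))
    return js.item()

@torch.no_grad()
def knowledge_utilization(prompt: str) -> float:
    """Internal-knowledge signal: logit-lens-style check of how early the final
    predicted token emerges across transformer layers (Llama-style module names
    assumed for the final norm and LM head)."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model(ids)
    final_tok = out.logits[0, -1].argmax()
    layers = out.hidden_states[1:]  # skip the embedding layer
    agree = sum(
        int(model.lm_head(model.model.norm(h[0, -1])).argmax() == final_tok)
        for h in layers
    )
    return agree / len(layers)
```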

Community

Paper submitter

New paper: LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals

RAG systems can still hallucinate, often due to conflicts between an LLM's internal knowledge and the retrieved external context. Quantifying how an LLM uses its internal knowledge and the external context when generating a response can therefore help detect hallucinations.

In this paper:

  • We propose LUMINA, a novel approach to quantify utilization of external context and internal knowledge for RAG-based hallucination detection.
  • We propose a framework to statistically validate LUMINA's measurements, showing that they align with the intended signals.
  • We conduct extensive experiments and show that LUMINA outperforms both score-based and learning-based methods in hallucination detection, establishing a new state-of-the-art (a rough evaluation sketch follows below).
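As a rough illustration of how such signals could be turned into a detector and evaluated, the sketch below (building on the two functions in the earlier snippet) combines them with a simple difference heuristic and scores the result with AUROC/AUPRC via scikit-learn. The actual LUMINA combination rule and benchmark loading are not described on this page, and the names here are placeholders.

```python
# Hypothetical harness: combine the two utilization signals into one
# hallucination score and evaluate it; LUMINA's actual rule may differ.
from sklearn.metrics import average_precision_score, roc_auc_score

def hallucination_score(question: str, context: str) -> float:
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    # Heuristic: heavy reliance on internal knowledge combined with low
    # context utilization is treated as evidence of hallucination.
    return knowledge_utilization(prompt) - context_utilization(question, context)

def evaluate(examples):
    """examples: iterable of (question, context, is_hallucination) triples from
    a RAG hallucination benchmark such as HalluRAG (loading omitted here)."""
    scores = [hallucination_score(q, c) for q, c, _ in examples]
    labels = [int(y) for _, _, y in examples]
    return {"AUROC": roc_auc_score(labels, scores),
            "AUPRC": average_precision_score(labels, scores)}
```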
