Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models • arXiv:2411.14257 • Published Nov 21, 2024
Distinguishing Ignorance from Error in LLM Hallucinations • arXiv:2410.22071 • Published Oct 29, 2024
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations • arXiv:2410.18860 • Published Oct 24, 2024
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation • arXiv:2410.11779 • Published Oct 15, 2024
LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations • arXiv:2410.02707 • Published Oct 3, 2024
Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation • arXiv:2402.13331 • Published Feb 20, 2024
INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection • arXiv:2402.03744 • Published Feb 6, 2024
Fine-grained Hallucination Detection and Editing for Language Models • arXiv:2401.06855 • Published Jan 12, 2024
The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input • arXiv:2501.03200 • Published Jan 2025