- Training Language Models to Self-Correct via Reinforcement Learning — arXiv:2409.12917, published Sep 19, 2024
- FactAlign: Long-form Factuality Alignment of Large Language Models — arXiv:2410.01691, published Oct 2, 2024
- LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations — arXiv:2410.02707, published Oct 3, 2024
- ECon: On the Detection and Resolution of Evidence Conflicts — arXiv:2410.04068, published Oct 5, 2024