Training Language Models to Self-Correct via Reinforcement Learning • arXiv:2409.12917 • Published Sep 19, 2024
FactAlign: Long-form Factuality Alignment of Large Language Models • arXiv:2410.01691 • Published Oct 2, 2024
LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations • arXiv:2410.02707 • Published Oct 3, 2024