Direct Preference Optimization: Your Language Model is Secretly a Reward Model • Paper • 2305.18290 • Published May 29, 2023
Fine-Grained Human Feedback Gives Better Rewards for Language Model Training • Paper • 2306.01693 • Published Jun 2, 2023
Secrets of RLHF in Large Language Models Part II: Reward Modeling • Paper • 2401.06080 • Published Jan 11, 2024
The Lessons of Developing Process Reward Models in Mathematical Reasoning • Paper • 2501.07301 • Published Jan 2025
O1 Replication Journey -- Part 3: Inference-time Scaling for Medical Reasoning • Paper • 2501.06458 • Published Jan 2025
Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs • Paper • 2412.21187 • Published Dec 2024