Abstract
As language models regularly make mistakes when solving math problems, automated identification of errors in the reasoning process becomes increasingly significant for their scalable oversight. In this paper, we introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning. It consists of 3,400 test cases, primarily focused on competition- and Olympiad-level math problems. Each test case contains a step-by-step solution with its error location annotated by human experts. Models are required to identify the earliest step that contains an error, or to conclude that all steps are correct. We conduct extensive evaluation on ProcessBench, involving two types of models: process reward models (PRMs) and critic models, where for the latter we prompt general language models to critique each solution step by step. We draw two main observations: (1) Existing PRMs typically fail to generalize to more challenging math problems beyond GSM8K and MATH. They underperform both critic models (i.e., prompted general language models) and our own PRM that is straightforwardly fine-tuned on the PRM800K dataset. (2) The best open-source model, QwQ-32B-Preview, demonstrates critique capability competitive with the proprietary model GPT-4o, although it still lags behind the reasoning-specialized o1-mini. We hope ProcessBench can foster future research in reasoning process assessment, paving the way toward scalable oversight of language models.
Community
We introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning.
Data: https://huggingface.co/datasets/Qwen/ProcessBench
Evaluation code: https://github.com/QwenLM/ProcessBench
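For reference, below is a minimal sketch of how one might load the benchmark and score a model's step-level predictions with the F1 of accuracies on erroneous and fully correct solutions. The split names, the field names (`problem`, `steps`, `label`), and the `-1` convention for fully correct solutions are assumptions about the released data, not a quotation of the official evaluation code linked above.

```python
# Minimal scoring sketch for ProcessBench (assumed data layout, not the
# official evaluation script).
from datasets import load_dataset

def f1_of_accuracies(split_name, predict_fn):
    """Harmonic mean (F1) of accuracy on erroneous solutions and accuracy
    on fully correct solutions, for one ProcessBench split."""
    data = load_dataset("Qwen/ProcessBench", split=split_name)
    hits_err = total_err = 0  # solutions containing an annotated error
    hits_ok = total_ok = 0    # solutions with all steps correct
    for case in data:
        # predict_fn returns the earliest erroneous step index,
        # or -1 if it judges every step to be correct (assumed convention).
        pred = predict_fn(case["problem"], case["steps"])
        if case["label"] == -1:
            total_ok += 1
            hits_ok += int(pred == -1)
        else:
            total_err += 1
            hits_err += int(pred == case["label"])
    acc_err = hits_err / max(total_err, 1)
    acc_ok = hits_ok / max(total_ok, 1)
    return 2 * acc_err * acc_ok / max(acc_err + acc_ok, 1e-9)
```

For example, `f1_of_accuracies("omnimath", my_critic)` would score a critic function on the Olympiad-level split, assuming the splits are named after their source datasets (gsm8k, math, olympiadbench, omnimath).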
Here are some intriguing conclusions from a few experiments:
- At present, PRMs whose training data is constructed via MCTS may not perform as well as a PRM trained directly on the PRM800K dataset.
- The more challenging the dataset, the higher the proportion of cases where the final answer is correct but the process leading to it is flawed. At the difficulty level of Omni-MATH, this occurs in over 50% of instances, so relying solely on answer matching as the reward signal may lead to scaling issues in the future.
- Surprisingly, the reasoning model QwQ-32B-Preview, which was not designed for the critic role and has not been trained on related data, performs exceptionally well as a critic, surpassing all known PRMs to date (a prompting sketch follows this list).
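As a concrete illustration of the critic setup mentioned above (prompting a general language model to find the earliest erroneous step), here is a rough sketch using the OpenAI API. The prompt wording, the 0-based step indexing, and the answer parsing are illustrative assumptions, not the paper's actual evaluation prompt.

```python
# Illustrative critic prompt: ask a general chat model for the earliest
# erroneous step index, or -1 if all steps are correct. The prompt text and
# parsing are assumptions for illustration, not the paper's exact setup.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITIC_PROMPT = (
    "You are given a math problem and a step-by-step solution.\n"
    "Check each step in order. Reply with the index (starting from 0) of the\n"
    "earliest incorrect step, or -1 if every step is correct.\n\n"
    "Problem:\n{problem}\n\nSolution steps:\n{steps}\n\n"
    "Answer with a single integer."
)

def critique(problem: str, steps: list[str], model: str = "gpt-4o") -> int:
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps))
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": CRITIC_PROMPT.format(problem=problem, steps=numbered)}],
        temperature=0,
    )
    match = re.search(r"-?\d+", resp.choices[0].message.content)
    return int(match.group()) if match else -1
```

Such a `critique` function can be plugged into the scoring sketch above as the `predict_fn` argument.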
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Enhancing Mathematical Reasoning in LLMs by Stepwise Correction (2024)
- Not All Votes Count! Programs as Verifiers Improve Self-Consistency of Language Models for Math Reasoning (2024)
- Preference Optimization for Reasoning with Pseudo Feedback (2024)
- Guiding Through Complexity: What Makes Good Supervision for Hard Reasoning Tasks? (2024)
- A Comparative Study on Reasoning Patterns of OpenAI's o1 Model (2024)
- ReasonAgain: Using Extractable Symbolic Programs to Evaluate Mathematical Reasoning (2024)
- Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning (2024)