arxiv:2412.06559

ProcessBench: Identifying Process Errors in Mathematical Reasoning

Published on Dec 9 · Submitted by chujiezheng on Dec 10 · #2 Paper of the day

Abstract

As language models regularly make mistakes when solving math problems, automated identification of errors in the reasoning process becomes increasingly significant for their scalable oversight. In this paper, we introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning. It consists of 3,400 test cases, primarily focused on competition- and Olympiad-level math problems. Each test case contains a step-by-step solution with the error location annotated by human experts. Models are required to identify the earliest step that contains an error, or conclude that all steps are correct. We conduct an extensive evaluation on ProcessBench, involving two types of models: process reward models (PRMs) and critic models, where for the latter we prompt general language models to critique each solution step by step. We draw two main observations: (1) Existing PRMs typically fail to generalize to more challenging math problems beyond GSM8K and MATH. They underperform both critic models (i.e., prompted general language models) and our own trained PRM that is straightforwardly fine-tuned on the PRM800K dataset. (2) The best open-source model, QwQ-32B-Preview, demonstrates critique capability competitive with the proprietary model GPT-4o, although it still lags behind the reasoning-specialized o1-mini. We hope ProcessBench can foster future research in reasoning process assessment, paving the way toward scalable oversight of language models.
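For concreteness, here is a minimal scoring sketch for the protocol described above. This is not the authors' released evaluation code; it assumes the convention that a label (or prediction) of -1 means all steps are correct, and that the reported metric is the F1 (harmonic mean) of the accuracies on the erroneous and fully correct subsets.

```python
# Minimal scoring sketch for a ProcessBench-style benchmark.
# Assumption: each case carries a human label `error_step` that is the
# 0-indexed earliest erroneous step, or -1 if every step is correct;
# model predictions use the same convention.

from dataclasses import dataclass

@dataclass
class Case:
    steps: list[str]      # the step-by-step solution
    error_step: int       # earliest error index, or -1 if all correct

def score(cases: list[Case], predictions: list[int]) -> float:
    """Return the F1 (harmonic mean) of the accuracy on erroneous cases
    and the accuracy on fully correct cases."""
    err_hits = err_total = ok_hits = ok_total = 0
    for case, pred in zip(cases, predictions):
        if case.error_step == -1:
            ok_total += 1
            ok_hits += pred == -1
        else:
            err_total += 1
            err_hits += pred == case.error_step
    acc_err = err_hits / err_total if err_total else 0.0
    acc_ok = ok_hits / ok_total if ok_total else 0.0
    if acc_err + acc_ok == 0:
        return 0.0
    return 2 * acc_err * acc_ok / (acc_err + acc_ok)
```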

Community

Paper author and submitter:

We introduce ProcessBench for measuring the ability to identify erroneous steps in mathematical reasoning.


Here are some intriguing conclusions from a few experiments:

  1. At present, PRMs whose training data is constructed via MCTS-style rollouts may not perform as effectively as a PRM trained directly on the PRM800K dataset (a rollout-labeling sketch follows this list).

  2. The more challenging the dataset, the higher the proportion of cases where the final answer is correct but the process leading to it is flawed. On problems of Omni-MATH-level difficulty, this occurs in over 50% of instances, so relying solely on answer matching as the reward rule may lead to scaling issues in the future (see the second sketch after this list).

  3. Surprisingly, the reasoning model QwQ-32B-Preview, which was not designed for the critic role and has not been trained on related data, performs exceptionally well as a critic, surpassing all PRMs known to date.
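On point 1: below is a minimal sketch of how rollout-based (MCTS-style) step labels are typically constructed, in the spirit of methods like Math-Shepherd. Here `generate` and `check_answer` are hypothetical stand-ins for a sampler and an answer matcher, not APIs from the paper.

```python
# Hedged sketch of rollout-style PRM data construction.
# A step is soft-labeled by how often completions sampled from the
# partial solution still reach the reference answer.

def estimate_step_label(problem, prefix_steps, reference_answer,
                        generate, check_answer, n_rollouts=8):
    """Return the fraction of rollouts from this prefix that end at
    the reference answer; threshold it to obtain hard step labels."""
    hits = 0
    for _ in range(n_rollouts):
        completion = generate(problem, prefix_steps)  # sample a continuation
        hits += bool(check_answer(completion, reference_answer))
    return hits / n_rollouts

# Toy usage with stub sampler/matcher:
demo = estimate_step_label(
    problem="If x = 3, what is 2x?",
    prefix_steps=["Let x = 3."],
    reference_answer="6",
    generate=lambda problem, prefix: "Then 2x = 6. The answer is 6.",
    check_answer=lambda completion, ref: ref in completion,
)
print(demo)  # 1.0 with this stub
```

Note that the label is ultimately derived from answer matching, which is exactly the blind spot point 2 describes.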
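On point 2: a small sketch (with hypothetical field names) of measuring how often answer matching alone would hand full reward to a flawed solution.

```python
# Fraction of answer-correct samples whose process contains an
# annotated error; the field names here are hypothetical.

def flawed_but_correct_rate(samples):
    """samples: [{"answer_correct": bool, "error_step": int}, ...]
    where error_step is the earliest error index, or -1 if none."""
    answer_correct = [s for s in samples if s["answer_correct"]]
    if not answer_correct:
        return 0.0
    flawed = sum(s["error_step"] != -1 for s in answer_correct)
    return flawed / len(answer_correct)

demo = flawed_but_correct_rate([
    {"answer_correct": True,  "error_step": 2},   # right answer, flawed steps
    {"answer_correct": True,  "error_step": -1},  # right answer, clean steps
    {"answer_correct": False, "error_step": 0},   # wrong answer
])
print(demo)  # 0.5
```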

This approach to handling the somewhat arbitrary splitting of solutions into steps via double line breaks is very clever - nice idea!
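As a rough illustration: once a solution has been reformatted so that steps are delimited by double line breaks, the split itself becomes trivial (a sketch, not the paper's code).

```python
# Split a reformatted solution into steps on double line breaks.
# Assumes the generator was prompted (or the text post-processed) so
# that each reasoning step is separated by a blank line.

def split_into_steps(solution: str) -> list[str]:
    return [step.strip() for step in solution.split("\n\n") if step.strip()]

example = "Step 1: Let x = 3.\n\nStep 2: Then 2x = 6.\n\nStep 3: So the answer is 6."
print(split_into_steps(example))
# ['Step 1: Let x = 3.', 'Step 2: Then 2x = 6.', 'Step 3: So the answer is 6.']
```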


Models citing this paper: 0
Datasets citing this paper: 1
Spaces citing this paper: 0
Collections including this paper: 2