arxiv:2501.04519

rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking

Published on Jan 8
· Submitted by lynazhang on Jan 9
#1 Paper of the day

Abstract

We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids naïve step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs' math reasoning to state-of-the-art levels. On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%. On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% of the brightest high school math students. Code and data will be available at https://github.com/microsoft/rStar.
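
As a rough illustration of the process-preference idea described in the abstract, here is a minimal sketch of turning MCTS-derived Q-values into step-level preference pairs for PPM training. This is illustrative only, not the authors' implementation: the `preference_pair` helper, the step texts, and the Q-values are made up, and the exact pairing and loss used in the paper may differ.

```python
# Minimal sketch (not the authors' code): each candidate step expanded from the
# same parent state carries a Q-value, e.g. the fraction of MCTS rollouts through
# that step which ended in a verified-correct final answer. The PPM can then be
# trained to prefer high-Q steps over low-Q ones with a pairwise loss.

def preference_pair(candidate_steps):
    """candidate_steps: list of (step_text, q_value) for sibling steps under one parent."""
    ranked = sorted(candidate_steps, key=lambda s: s[1], reverse=True)
    chosen, rejected = ranked[0][0], ranked[-1][0]
    return chosen, rejected  # feed into a pairwise (e.g. Bradley-Terry style) PPM loss

# Toy example: four candidate next steps with rollout-derived Q-values.
steps = [("apply AM-GM to bound the sum", 0.75),
         ("expand both sides and compare terms", 0.40),
         ("guess x = 1 and check", 0.05),
         ("substitute y = x^2 to reduce the degree", 0.60)]
print(preference_pair(steps))
# -> ('apply AM-GM to bound the sum', 'guess x = 1 and check')
```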

Community

Paper author Paper submitter

We present rStar-Math to demonstrate that small language models (SLMs, 1.5B-7B) can rival or even surpass the math reasoning capability of OpenAI o1

holy... shit?

The GitHub link isn't working.

Paper author

As we are still undergoing the internal review process for open-source release, the repository remains private for now. Please stay tuned!

Very impressive, and I love the simplicity of using Q values as annotations! You mention 64 trajectories as some sort of saturation bound; is that right, or have you just not tried scaling this approach even more?

Paper author

Thank you! On challenging math benchmarks such as AIME, performance nearly saturates with 64 trajectories. For college math, performance continues to improve steadily; however, we did not scale beyond 64 due to the increased search cost. We believe AIME performance can be further improved by synthesizing additional Olympiad-level math problems to improve both the policy model and the process reward model. We leave this as our future work.
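
To make the "number of trajectories" knob concrete, here is a rough best-of-N harness. It is a simpler stand-in for the paper's MCTS-based search, not the authors' evaluation code; `sample_trajectory` and `ppm_score` are toy placeholders for the policy SLM and the PPM.

```python
import random

def best_of_n(problem, n, sample_trajectory, ppm_score):
    """Sample n full trajectories and keep the one the PPM scores highest."""
    trajectories = [sample_trajectory(problem) for _ in range(n)]
    return max(trajectories, key=lambda t: ppm_score(problem, t))

# Toy placeholders so the sketch runs; in practice these would call the policy SLM and PPM.
def sample_trajectory(problem):
    return [f"step {i}" for i in range(random.randint(3, 8))]

def ppm_score(problem, trajectory):
    return random.random()

for n in (1, 4, 16, 64):  # the reply above reports near-saturation around 64 trajectories on AIME
    selected = best_of_n("toy problem", n, sample_trajectory, ppm_score)
    print(f"n={n}: selected trajectory with {len(selected)} steps")
```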

Thank you for sharing this work. I appreciate the blend of Monte Carlo Tree Search with smaller models to address step-by-step math reasoning. The idea of generating self-verified solutions rather than relying on a larger teacher model is promising, and it is good to see how you handle the complexity of code-based rollouts. I am curious how this approach might adapt to tasks that involve geometric proofs or more symbolic reasoning. It would also be interesting to learn about the practical limits when problems become highly intricate. Overall, this is a thoughtful piece of research, and I look forward to any future expansions into broader math domains.


Thank you for your comments! We currently have limited experience with tasks involving more symbolic reasoning. However, based on our understanding, the MCTS-based approach can adapt well to such tasks. You might find AlphaGeometry (https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/) and DeepSeek-Prover1.5 (https://arxiv.org/abs/2408.08152) to be valuable references for exploring this direction further.

This is an incredibly impressive paper, and I’m very much looking forward to seeing the open-source code and the detailed development process.

We created a deep-dive video for this paper: https://www.youtube.com/watch?v=cHgHS6Y3QP0
We'd love to hear your feedback!


Very interesting work! I was curious if there is any section addressing data decontamination. From what I understand, Numina Math may include a notable portion of problems from OlympiadBench and Omni-Math.


Thank you for your question! Decontamination is indeed critical for ensuring unbiased model performance evaluation. We tried our best to address this, including problem matching to identify and remove contaminated training samples from the dataset. For most of our evaluation benchmarks, such as GSM8K, AIME, AMC, CollegeMath and Gaokao, we did not find significant contamination. For MATH, OlympiadBench and Omni-Math, we identified a few hundred potentially contaminated examples and removed them from the training set to maintain the integrity of our evaluations.
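
For readers curious what "problem matching" can look like in practice, here is an illustrative n-gram overlap check. This is a common decontamination approach and only a sketch; the exact matching procedure used in the paper is not detailed in this thread.

```python
def ngrams(text, n=8):
    """Set of word-level n-grams in lowercased text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 0))}

def is_contaminated(problem, benchmark_problems, n=8):
    """True if the training problem shares any n-gram with a benchmark problem."""
    grams = ngrams(problem, n)
    return any(grams & ngrams(b, n) for b in benchmark_problems)

# Toy usage: drop training problems that share an 8-gram with any benchmark problem.
train = ["Find all real x such that x^2 - 5x + 6 = 0 and explain every step."]
bench = ["Find all real x such that x^2 - 5x + 6 = 0. Justify your answer."]
clean = [p for p in train if not is_contaminated(p, bench)]
print(len(clean))  # 0: the toy training problem overlaps the benchmark and is removed
```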

This is like self-play to learn Go. It should be able to dramatically improve coding skills too.

Very impressive paper, congrats!

This is very nice work. Is it possible to evaluate the original Qwen models with your PPM?
Could you also clarify how trajectory counting works? For instance, suppose I start with 64 trajectories and all solutions have 10 steps. After each step, I retain only the 32 best paths, then split each of these 32 paths into two, giving 64 trajectories again. I repeat this process 10 times (since a solution has 10 steps). In this case, how many trajectories do I have in total? Is it just 64, or is it 64 + 32×10?
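
A small arithmetic sketch may help frame the counting scheme described in this question. It only formalizes the procedure as stated above and is not a statement of how the paper itself counts trajectories.

```python
# Counting under the scheme in the question (illustration only, not the paper's accounting).
initial = 64   # candidate partial paths sampled at the first step
kept = 32      # best partial paths retained after each step
branch = 2     # each kept path is split into `branch` continuations
steps = 10     # steps per solution

generated_per_step = kept * branch  # 64 new candidate partial paths at each later step
total_partial_paths = initial + generated_per_step * (steps - 1)
final_trajectories = kept * branch  # at most 64 distinct complete paths at the end

print(total_partial_paths)   # 640: partial paths generated across all 10 steps
print(final_trajectories)    # 64: distinct complete trajectories remaining
```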

