arxiv:2510.25065

Reasoning-Aware GRPO using Process Mining


Abstract

Reinforcement learning (RL)-based post-training has been crucial for enabling multi-step reasoning in large reasoning models (LRMs), yet current reward schemes are typically outcome-centric. We propose PM4GRPO, a reasoning-aware Group Relative Policy Optimization (GRPO) that augments standard answer/format rewards with signals over the reasoning procedure. To this end, process mining techniques are utilized to compute a scalar conformance reward that measures how closely a policy model's reasoning aligns with the pretrained teacher model. The empirical results on five benchmarks demonstrate that PM4GRPO significantly outperforms existing methodologies for GRPO-based post-training. These results highlight that leveraging process mining for reasoning-aware GRPO effectively enhances the reasoning capabilities of policy models.
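To make the reward structure concrete, here is a minimal, illustrative sketch of how outcome rewards (answer correctness, format) can be augmented with a scalar conformance term and turned into GRPO-style group-relative advantages, as the abstract describes. The weights, rollout data, and helper names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: standardize rewards within a group of rollouts for one prompt."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def total_reward(answer_correct, format_ok, conformance, w_conf=0.5):
    """Outcome rewards plus a scalar conformance term (weights are illustrative)."""
    return float(answer_correct) + 0.5 * float(format_ok) + w_conf * conformance

# Hypothetical group of 4 rollouts for one prompt:
# (is_correct, well_formatted, conformance score in [0, 1])
rollouts = [(1, 1, 0.9), (1, 1, 0.4), (0, 1, 0.7), (0, 0, 0.1)]
rewards = [total_reward(*r) for r in rollouts]
advantages = group_relative_advantages(rewards)
print(advantages)
```

Under this scheme, rollouts whose reasoning conforms more closely to the teacher receive larger advantages within their group, even when several rollouts share the same answer and format rewards.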

Community


PM4GRPO incorporates the reasoning process into the post-training phase through process mining. This allows the policy optimization to better guide the policy model to imitate the teacher model's reasoning process. In other words, PM4GRPO achieves reasoning-aware policy optimization through process mining.
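As a rough illustration of the process-mining side, the sketch below discovers a process model from hypothetical teacher reasoning traces and scores a policy trace's replay fitness on it as a scalar conformance reward. It assumes pm4py's simplified interface (format_dataframe, discover_petri_net_inductive, fitness_token_based_replay); the step names, trace construction, and the use of token-based replay fitness are illustrative assumptions, not the authors' pipeline.

```python
import pandas as pd
import pm4py

# Hypothetical teacher reasoning traces: each case is a sequence of reasoning steps.
teacher_traces = [
    ("t1", ["restate_problem", "set_up_equation", "solve", "verify", "answer"]),
    ("t2", ["restate_problem", "set_up_equation", "solve", "answer"]),
]
rows = [
    {"case_id": cid, "activity": act,
     "timestamp": pd.Timestamp("2024-01-01") + pd.Timedelta(seconds=i)}
    for cid, steps in teacher_traces
    for i, act in enumerate(steps)
]
teacher_log = pm4py.format_dataframe(
    pd.DataFrame(rows), case_id="case_id", activity_key="activity", timestamp_key="timestamp"
)

# Discover a reference process model of the teacher's reasoning behaviour.
net, im, fm = pm4py.discover_petri_net_inductive(teacher_log)

def conformance_reward(policy_steps):
    """Scalar reward in [0, 1]: replay fitness of one policy trace on the teacher model."""
    policy_rows = [
        {"case_id": "p1", "activity": act,
         "timestamp": pd.Timestamp("2024-01-01") + pd.Timedelta(seconds=i)}
        for i, act in enumerate(policy_steps)
    ]
    policy_log = pm4py.format_dataframe(
        pd.DataFrame(policy_rows), case_id="case_id", activity_key="activity", timestamp_key="timestamp"
    )
    fitness = pm4py.fitness_token_based_replay(policy_log, net, im, fm)
    return fitness["log_fitness"]

# A partially conformant policy trace gets a fitness below 1.0.
print(conformance_reward(["restate_problem", "solve", "answer"]))
```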

