Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training
Abstract
Reinforce-Ada is an adaptive sampling framework for online reinforcement learning post-training of large language models, which accelerates convergence and improves performance by dynamically reallocating sampling effort based on prompt uncertainty.
Reinforcement learning applied to large language models (LLMs) for reasoning tasks is often bottlenecked by unstable gradient estimates due to fixed and uniform sampling of responses across prompts. Prior work such as GVM-RAFT addresses this by dynamically allocating inference budget per prompt to minimize stochastic gradient variance under a budget constraint. Inspired by this insight, we propose Reinforce-Ada, an adaptive sampling framework for online RL post-training of LLMs that continuously reallocates sampling effort to the prompts with the greatest uncertainty or learning potential. Unlike conventional two-stage allocation methods, Reinforce-Ada interleaves estimation and sampling in an online successive elimination process, and automatically stops sampling for a prompt once sufficient signal is collected. To stabilize updates, we form fixed-size groups with enforced reward diversity and compute advantage baselines using global statistics aggregated over the adaptive sampling phase. Empirical results across multiple model architectures and reasoning benchmarks show that Reinforce-Ada accelerates convergence and improves final performance compared to GRPO, especially when using the balanced sampling variant. Our work highlights the central role of variance-aware, adaptive data curation in enabling efficient and reliable reinforcement learning for reasoning-capable LLMs. Code is available at https://github.com/RLHFlow/Reinforce-Ada.
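The abstract mentions advantage baselines computed from global statistics aggregated over the adaptive sampling phase. Below is a minimal sketch of one reading of that idea, where the baseline comes from every response drawn for a prompt rather than only the fixed-size group kept for the update; the function and argument names are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np

def global_baseline_advantages(all_samples, group):
    """Advantage sketch: normalize rewards of the kept group against
    statistics of *all* (response, reward) pairs drawn for this prompt
    during adaptive sampling, not just the downsampled group."""
    rewards = np.array([r for _, r in all_samples], dtype=np.float32)
    mu, sigma = rewards.mean(), rewards.std() + 1e-6  # avoid divide-by-zero
    return [(resp, (r - mu) / sigma) for resp, r in group]
```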
Community
🤔 The Problem: Where are my training signals going?
You might be wasting up to 60% of your compute. In RL methods like GRPO/RFT, prompts often generate "zero-signal" sample groups (all-pass or all-fail). No reward variance means no gradient, so those GPU cycles are wasted.
Common fixes fall short:
❌ Just drop them (DAPO)? A temporary fix that leaves many prompts untrained long-term.
❌ More sampling (e.g., n = 256)? Expensive, with diminishing returns.
💡 Our First Step: GVM (NeurIPS'25)
- Our NeurIPS'25 paper, GVM, was the first to provide a theoretical answer from the perspective of gradient variance minimization: the most efficient training allocates the compute budget according to prompt difficulty.
- Core Idea: Harder prompts deserve more sampling opportunities.
- Implementation: an "explore-exploit" two-stage strategy: first estimate difficulty with a few samples, then allocate the budget accordingly (a sketch follows this list).
- But… the pipeline is complex, and difficulty estimates from only a few samples are often inaccurate.
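To make the two-stage idea concrete, here is a minimal sketch assuming binary rewards and an uncertainty score proportional to sqrt(p(1-p)) of the estimated pass rate p; `generate_fn`, `reward_fn`, and the allocation rule are illustrative placeholders, not GVM's exact procedure.

```python
import math

def two_stage_allocation(prompts, generate_fn, reward_fn,
                         pilot_n=4, total_budget=512):
    """Stage 1: estimate each prompt's pass rate with a small pilot budget.
    Stage 2: split the remaining budget in proportion to an uncertainty
    score, so medium-difficulty prompts receive the most extra samples."""
    pass_rate = {}
    for p in prompts:
        rewards = [reward_fn(p, r) for r in generate_fn(p, pilot_n)]
        pass_rate[p] = sum(rewards) / len(rewards)

    remaining = max(total_budget - pilot_n * len(prompts), 0)
    scores = {p: math.sqrt(q * (1.0 - q)) for p, q in pass_rate.items()}
    total = sum(scores.values()) or 1.0
    return {p: pilot_n + int(remaining * s / total) for p, s in scores.items()}
```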
✨ The New Answer: Online Adaptive Sampling (Reinforce-Ada)
The core of this new report is a simpler, more elegant "online" strategy that sidesteps GVM's bottlenecks. We've merged estimation and allocation into a single, unified process (sketched in code after the list below):
- Keep sampling each prompt until it yields enough signal.
- Auto-Stop: As soon as a prompt yields sufficient signal (i.e., has both successes and failures), we stop sampling it. This automatically forces the model to keep "practicing" on the tougher prompts.
- Key Advantage: This process implicitly and dynamically performs difficulty assessment and resource allocation, making it precise and highly efficient.
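A minimal sketch of the online loop described above. `generate_fn` and `reward_fn` are hypothetical hooks for your rollout and verifier; the stopping rule (both a success and a failure observed) and the fixed-size, diversity-preserving group follow the description above, but details such as round size and downsampling are assumptions.

```python
import random

def adaptive_sample(prompts, generate_fn, reward_fn,
                    round_size=4, max_rounds=16, group_size=8):
    """Sample prompts in rounds; retire a prompt as soon as it shows
    reward diversity, so later rounds concentrate on unresolved prompts."""
    pool = {p: [] for p in prompts}          # prompt -> list of (response, reward)
    active = set(prompts)

    for _ in range(max_rounds):
        if not active:
            break
        for p in list(active):
            for resp in generate_fn(p, round_size):
                pool[p].append((resp, reward_fn(p, resp)))
            if len({r for _, r in pool[p]}) > 1:   # both successes and failures seen
                active.discard(p)

    groups = {}
    for p, samples in pool.items():
        rewards = [r for _, r in samples]
        if len(set(rewards)) < 2:
            continue                          # still zero-signal after max_rounds: skip
        # Keep a fixed-size group that is guaranteed to contain both outcomes.
        chosen = {rewards.index(max(rewards)), rewards.index(min(rewards))}
        others = [i for i in range(len(samples)) if i not in chosen]
        chosen.update(random.sample(others, min(group_size - 2, len(others))))
        groups[p] = [samples[i] for i in chosen]
    return groups
```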
🚀 Results & Implementation:
- One-Line Code Change: extremely low-cost integration; just replace the verl sampling function (see the hedged sketch below).
- Stable Improvements: delivered consistent performance gains across different families of foundation models.
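For orientation, the change sits at the rollout step: instead of generating a fixed n responses per prompt, call the adaptive loop. This snippet reuses the hypothetical `adaptive_sample` sketch above; verl's actual function names and signatures differ, so treat it as pseudocode for where the swap happens rather than the repository's literal patch.

```python
# Before: uniform allocation, n responses for every prompt in the batch.
# responses = {p: generate_fn(p, n) for p in batch_prompts}

# After: adaptive allocation; easy prompts stop early, hard ones keep sampling.
groups = adaptive_sample(batch_prompts, generate_fn, reward_fn,
                         round_size=4, max_rounds=16, group_size=8)
# `groups` then feeds the usual GRPO-style advantage and policy-update code.
```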
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Improving Sampling Efficiency in RLVR through Adaptive Rollout and Response Reuse (2025)
- Prompt Curriculum Learning for Efficient LLM Post-Training (2025)
- No Prompt Left Behind: Exploiting Zero-Variance Prompts in LLM Reinforcement Learning via Entropy-Guided Advantage Shaping (2025)
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources (2025)
- Single-stream Policy Optimization (2025)
- DCPO: Dynamic Clipping Policy Optimization (2025)
- Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning (2025)