Abstract
AEPO, an agentic RL algorithm, addresses entropy-related challenges in web agent training, enhancing performance and stability across various datasets.
Recently, Agentic Reinforcement Learning (Agentic RL) has made significant progress in incentivizing the multi-turn, long-horizon tool-use capabilities of web agents. While mainstream agentic RL algorithms autonomously explore high-uncertainty tool-call steps under the guidance of entropy, over-reliance on entropy signals can introduce further constraints, leading to training collapse. In this paper, we delve into the challenges caused by entropy and propose Agentic Entropy-Balanced Policy Optimization (AEPO), an agentic RL algorithm designed to balance entropy in both the rollout and policy-update phases. AEPO comprises two core components: (1) a dynamic entropy-balanced rollout mechanism that adaptively allocates the global and branch sampling budgets through entropy pre-monitoring, while imposing a branch penalty on consecutive high-entropy tool-call steps to prevent over-branching; and (2) Entropy-Balanced Policy Optimization, which inserts a stop-gradient operation into the high-entropy clipping term to preserve and properly rescale gradients on high-entropy tokens, while incorporating entropy-aware advantage estimation to prioritize learning on high-uncertainty tokens. Results across 14 challenging datasets show that AEPO consistently outperforms 7 mainstream RL algorithms. With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity's Last Exam, and 43.0% on WebWalker for Pass@1; 65.0% on GAIA, 26.0% on Humanity's Last Exam, and 70.0% on WebWalker for Pass@5. Further analysis reveals that AEPO improves rollout sampling diversity while maintaining stable policy entropy, facilitating scalable web agent training.
Community
We propose Agentic Entropy-Balanced Policy Optimization (AEPO), an entropy-balanced agentic RL algorithm designed for training multi-turn web agents. AEPO focuses on balancing and rationalizing rollout branching and policy updates under the guidance of high-entropy tool calls, thereby achieving more stable RL training.
With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity's Last Exam, and 43.0% on WebWalkerQA for Pass@1; 65.0% on GAIA, 26.0% on Humanity's Last Exam, and 70.0% on WebWalkerQA for Pass@5.
🔧 All the code, datasets and model checkpoints of AEPO are fully open-sourced:
Github: https://github.com/dongguanting/ARPO
Models: https://huggingface.co/collections/dongguanting/aepo-68ef6832c99697ee03d5e1c7
🔥 Key Insights:
We systematically reveal two entropy-driven issues inherent to agentic RL: "High-Entropy Rollout Collapse" and "High-Entropy Token Gradient Clipping" (as shown in the above figure). Through preliminary experiments, we quantify their impact on multi-turn web-agent training, offering empirical evidence for further research into entropy balancing.
We propose a Dynamic Entropy-Balanced Rollout mechanism, which adaptively allocates rollout sampling budgets via entropy pre-monitoring, while imposing a branch penalty on consecutive high-entropy steps to prevent over-branching (a minimal rollout sketch follows this list).
We introduce Entropy-Balanced Policy Optimization, which integrates a stop-gradient operation into the high-entropy clipping term to preserve and rescale gradients on high-entropy tokens, while incorporating entropy-aware advantage estimation to prioritize learning on high-uncertainty tokens (a policy-update sketch also follows this list).
Experiments on 14 challenging benchmarks demonstrate that AEPO consistently outperforms 7 mainstream RL algorithms in web agent training. With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity's Last Exam, and 43.0% on WebWalkerQA for Pass@1; 65.0% on GAIA, 26.0% on Humanity's Last Exam, and 70.0% on WebWalkerQA for Pass@5.
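To make the rollout mechanism concrete, here is a minimal Python sketch of the entropy-pre-monitoring and branch-penalty ideas described above. It is an illustration rather than the released AEPO implementation: the `generate` interface, the budget-splitting heuristic, and the threshold/penalty values are assumptions chosen for readability; please refer to the GitHub repository for the actual code.

```python
import numpy as np

def token_entropy(logprobs: np.ndarray) -> float:
    """Mean per-token entropy, given per-token log-prob distributions of shape [T, V]."""
    probs = np.exp(logprobs)
    return float(-(probs * logprobs).sum(axis=-1).mean())

def entropy_balanced_rollout(generate, question, total_budget=8,
                             entropy_threshold=1.0, branch_penalty=0.5):
    # Entropy pre-monitoring: probe the question once to estimate its uncertainty.
    probe = generate(question, n=1)[0]
    probe_entropy = token_entropy(probe["logprobs"])

    # Higher pre-monitored entropy -> reserve more budget for branch (partial-prefix)
    # sampling; lower entropy -> spend more budget on independent global rollouts.
    # (This specific split rule is an illustrative assumption.)
    branch_ratio = min(1.0, probe_entropy / (2.0 * entropy_threshold))
    branch_budget = int(round(total_budget * branch_ratio))
    global_budget = max(total_budget - branch_budget, 1)

    trajectories = list(generate(question, n=global_budget))

    # Branch sampling with a penalty on consecutive high-entropy tool-call steps.
    for traj in list(trajectories):            # iterate over a snapshot while appending
        consecutive_high = 0
        for step in traj["tool_call_steps"]:
            if branch_budget <= 0:
                return trajectories
            # Penalize the effective entropy after each consecutive branch, which
            # discourages over-branching on long runs of high-entropy steps.
            effective = token_entropy(step["logprobs"]) - branch_penalty * consecutive_high
            if effective > entropy_threshold:
                trajectories.extend(generate(question, n=1, prefix=step["prefix"]))
                branch_budget -= 1
                consecutive_high += 1
            else:
                consecutive_high = 0
    return trajectories
```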
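Similarly, below is a minimal PyTorch-style sketch of the entropy-balanced policy update: high-entropy tokens that would otherwise be hard-clipped receive a stop-gradient rescaling so their gradients are preserved, and advantages are reweighted by (detached) token entropy. The quantile threshold, the reweighting coefficient `alpha`, and the exact rescaling form are assumptions for illustration, not the paper's official formulation.

```python
import torch

def entropy_balanced_policy_loss(logp_new, logp_old, token_entropy, advantages,
                                 clip_eps=0.2, entropy_quantile=0.8, alpha=0.2):
    """PPO-style token loss with entropy-aware advantages and gradient-preserving
    clipping for high-entropy tokens. All inputs are 1-D tensors over tokens."""
    ratio = torch.exp(logp_new - logp_old)

    # Entropy-aware advantage estimation: upweight high-uncertainty tokens.
    # The weight is detached so the reweighting itself contributes no gradient.
    ent_weight = (1.0 + alpha * token_entropy / (token_entropy.mean() + 1e-8)).detach()
    adv = advantages * ent_weight

    # Standard clipped surrogate.
    clipped_ratio = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    surrogate = torch.minimum(ratio * adv, clipped_ratio * adv)

    # For high-entropy tokens outside the clip range, replace the hard clip with a
    # stop-gradient rescaling: sg(clipped / ratio) * ratio has the same forward value
    # as clipped_ratio but still passes (rescaled) gradients through `ratio`.
    high_entropy = token_entropy > torch.quantile(token_entropy, entropy_quantile)
    out_of_range = (ratio < 1.0 - clip_eps) | (ratio > 1.0 + clip_eps)
    preserved = (clipped_ratio / ratio).detach() * ratio * adv
    surrogate = torch.where(high_entropy & out_of_range, preserved, surrogate)

    return -surrogate.mean()
```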
✨ Two entropy-driven challenges:
🔥 Overview of AEPO:
An arXiv Explained breakdown of this paper 👉 https://arxivexplained.com/papers/agentic-entropy-balanced-policy-optimization
The following papers were recommended by the Semantic Scholar API
- From Uniform to Heterogeneous: Tailoring Policy Optimization to Every Token's Nature (2025)
- EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning (2025)
- Demystifying Reinforcement Learning in Agentic Reasoning (2025)
- Rediscovering Entropy Regularization: Adaptive Coefficient Unlocks Its Potential for LLM Reinforcement Learning (2025)
- CE-GPPO: Coordinating Entropy via Gradient-Preserving Clipping Policy Optimization in Reinforcement Learning (2025)
- Learn the Ropes, Then Trust the Wins: Self-imitation with Progressive Exploration for Agentic Reinforcement Learning (2025)
- Unlocking Exploration in RLVR: Uncertainty-aware Advantage Shaping for Deeper Reasoning (2025)