Abstract
SimpleVLA-RL, an RL framework for VLA models, enhances long-horizon action planning, achieves state-of-the-art performance, and discovers novel patterns during training.
Vision-Language-Action (VLA) models have recently emerged as a powerful paradigm for robotic manipulation. Despite substantial progress enabled by large-scale pretraining and supervised fine-tuning (SFT), these models face two fundamental challenges: (i) the scarcity and high cost of the large-scale human-operated robotic trajectories required to scale SFT, and (ii) limited generalization to tasks involving distribution shift. Recent breakthroughs in Large Reasoning Models (LRMs) demonstrate that reinforcement learning (RL) can dramatically enhance step-by-step reasoning capabilities, raising a natural question: can RL similarly improve the long-horizon, step-by-step action planning of VLA models? In this work, we introduce SimpleVLA-RL, an efficient RL framework tailored for VLA models. Building upon veRL, we introduce VLA-specific trajectory sampling, scalable parallelization, multi-environment rendering, and optimized loss computation. When applied to OpenVLA-OFT, SimpleVLA-RL achieves SoTA performance on LIBERO and even outperforms pi_0 on RoboTwin 1.0 & 2.0 with the exploration-enhancing strategies we introduce. SimpleVLA-RL not only reduces dependence on large-scale data and enables robust generalization, but also markedly surpasses SFT on real-world tasks. Moreover, we identify a novel phenomenon, "pushcut", during RL training, wherein the policy discovers action patterns never exhibited during prior training. GitHub: https://github.com/PRIME-RL/SimpleVLA-RL
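To make the recipe concrete, below is a minimal sketch of the kind of outcome-reward policy-gradient loop the abstract describes: sample groups of rollouts in simulated environments, score each trajectory with a sparse binary task-success reward, and update the policy using a group-centered advantage. All names here (`policy`, `env`, `rollout`, `rl_step`, the `env.step` return signature) are hypothetical placeholders for illustration, not the SimpleVLA-RL API; see the GitHub repository for the actual implementation.

```python
# Illustrative sketch only; assumes a torch-distribution-returning VLA policy
# and a simulated env with a hypothetical (obs, done, success) step interface.
import torch

def rollout(policy, env, max_steps=200):
    """Sample one trajectory; return per-step log-probs of the chosen actions
    and a binary task-success reward (the sparse outcome reward)."""
    obs = env.reset()
    log_probs, success = [], False
    for _ in range(max_steps):
        dist = policy(obs)                    # action distribution from the VLA model
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        obs, done, success = env.step(action) # hypothetical env interface
        if done:
            break
    return torch.stack(log_probs), float(success)

def rl_step(policy, envs, optimizer, group_size=8):
    """One simplified update: sample a group of rollouts per task and ascend
    the policy gradient with the group-centered success rate as advantage."""
    rewards, traj_log_probs = [], []
    for env in envs:                          # in practice, parallel renderers
        for _ in range(group_size):
            lp, r = rollout(policy, env)
            traj_log_probs.append(lp.sum())
            rewards.append(r)
    rewards = torch.tensor(rewards)
    advantages = rewards - rewards.mean()     # centered binary outcome reward
    loss = -(advantages * torch.stack(traj_log_probs)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Centering the binary reward within each rollout group means only trajectories that succeed more (or less) often than their peers drive the update, which is one simple way sparse success signals can be turned into usable advantages.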
Community
We introduce SimpleVLA-RL, an efficient reinforcement learning framework for Vision-Language-Action (VLA) models that integrates training, inference, and rendering. It reaches state-of-the-art performance on LIBERO (99%) and an 80% relative improvement on RoboTwin 1.0 & 2.0, significantly surpassing advanced models such as pi_0. On real robots it delivers a 120% relative improvement, outperforming RDT. Moreover, RL for VLA alleviates the data-scarcity bottleneck of SFT and substantially enhances the generalization capacity of VLA models.
GitHub: https://github.com/PRIME-RL/SimpleVLA-RL
This is an automated message from the Librarian Bot. I found the following similar papers, recommended by the Semantic Scholar API:
- CO-RFT: Efficient Fine-Tuning of Vision-Language-Action Models through Chunked Offline Reinforcement Learning (2025)
- H-RDT: Human Manipulation Enhanced Bimanual Robotic Manipulation (2025)
- LLaDA-VLA: Vision Language Diffusion Action Models (2025)
- Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation (2025)
- ReconVLA: Reconstructive Vision-Language-Action Model as Effective Robot Perceiver (2025)
- Reinforcement Learning in Vision: A Survey (2025)
- VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model (2025)