Maximizing Alignment with Minimal Feedback: Efficiently Learning Rewards for Visuomotor Robot Policy Alignment
Abstract
Visuomotor robot policies, increasingly pre-trained on large-scale datasets, promise significant advancements across robotics domains. However, aligning these policies with end-user preferences remains a challenge, particularly when the preferences are hard to specify. While reinforcement learning from human feedback (RLHF) has become the predominant mechanism for alignment in non-embodied domains like large language models, it has not seen the same success in aligning visuomotor policies due to the prohibitive amount of human feedback required to learn visual reward functions. To address this limitation, we propose Representation-Aligned Preference-based Learning (RAPL), an observation-only method for learning visual rewards from significantly less human preference feedback. Unlike traditional RLHF, RAPL focuses human feedback on fine-tuning pre-trained vision encoders to align with the end-user's visual representation and then constructs a dense visual reward via feature matching in this aligned representation space. We first validate RAPL through simulation experiments in the X-Magical benchmark and Franka Panda robotic manipulation, demonstrating that it learns rewards aligned with human preferences, uses preference data more efficiently, and generalizes across robot embodiments. Finally, our hardware experiments align pre-trained Diffusion Policies for three object manipulation tasks. We find that RAPL can fine-tune these policies with 5x less real human preference data, taking the first step towards minimizing human feedback while maximizing visuomotor robot policy alignment.
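To make the two-step recipe above concrete, below is a minimal, hedged PyTorch sketch, not the paper's implementation: it assumes human preference feedback arrives as triplets of image trajectories (a reference plus a preferred and a dispreferred rollout), uses a Bradley-Terry style loss to fine-tune the vision encoder so that preferred rollouts sit closer to the reference in feature space, and defines the dense visual reward as a simple nearest-frame feature-matching score in that aligned space. The triplet structure, the matching cost, and all function names are illustrative assumptions rather than the exact RAPL objective.

```python
# Minimal, hedged sketch of the two-step idea described above (NOT the paper's code).
# Assumptions (illustrative, not from the paper):
#   - `encoder` is a pre-trained vision encoder mapping a (T, C, H, W) frame stack
#     to (T, D) features, and is the only module fine-tuned on preference labels.
#   - Human feedback is a triplet: a reference trajectory plus a preferred and a
#     dispreferred rollout.
#   - The dense reward is a nearest-frame feature-matching score; the paper's exact
#     matching cost and training objective may differ.
import torch
import torch.nn.functional as F


def embed_trajectory(encoder, frames):
    """Encode a (T, C, H, W) stack of frames into unit-norm per-frame features."""
    feats = encoder(frames)            # (T, D)
    return F.normalize(feats, dim=-1)


def trajectory_distance(encoder, traj_a, traj_b):
    """Mean pairwise feature distance between two trajectories."""
    fa = embed_trajectory(encoder, traj_a)
    fb = embed_trajectory(encoder, traj_b)
    return torch.cdist(fa, fb).mean()


def preference_alignment_loss(encoder, reference, preferred, dispreferred):
    """Bradley-Terry style loss: the preferred rollout should lie closer to the
    reference than the dispreferred one in the encoder's feature space."""
    d_pos = trajectory_distance(encoder, reference, preferred)
    d_neg = trajectory_distance(encoder, reference, dispreferred)
    return -F.logsigmoid(d_neg - d_pos)   # smaller distance == more preferred


def visual_reward(encoder, observation, reference_traj):
    """Dense per-step reward: negative distance from the current observation to
    the closest frame of a preferred reference trajectory."""
    f_obs = embed_trajectory(encoder, observation.unsqueeze(0))  # (1, D)
    f_ref = embed_trajectory(encoder, reference_traj)            # (T, D)
    return -torch.cdist(f_obs, f_ref).min()
```

In this sketch, `preference_alignment_loss` would be minimized over the small set of human-labeled triplets to fine-tune only the encoder, and `visual_reward` would then supply the dense signal for fine-tuning the downstream visuomotor policy, consistent with the abstract's description that human feedback is spent on representation alignment rather than on training a separate reward predictor.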
Community
Our paper, “Maximizing Alignment with Minimal Feedback: Efficiently Learning Rewards for Visuomotor Robot Policy Alignment,” aims to bring the success of preference alignment popularized in non-embodied foundation models (e.g., LLMs) to visuomotor robotics models. This transition is challenging due to the extensive amount of human feedback required to learn visual reward functions. To address this, we propose an observation-only, data-efficient approach for aligning visuomotor policies with end-user preferences.
In this submission, we demonstrate how our method can alleviate the human feedback burden while still ensuring high-quality visuomotor robot policy alignment. We aim to inspire future research on aligning next-generation visuomotor policies with user needs while significantly reducing the human labeling burden.
Attached is a video example. Here, we want the robot to help us pick up a fork and put it in a bowl. The robot's visuomotor policy is trained to imitate diverse teleoperators; as a result, the pre-trained policy frequently makes contact with the tines of the fork and drops it outside the bowl. With just 20 human preference labels on the policy's generated behaviors, our method aligns the robot's policy with the end user's preferences: the robot grasps the fork by the handle and gently places it inside the bowl without dropping it.
The following papers were recommended by the Semantic Scholar API
- R3HF: Reward Redistribution for Enhancing Reinforcement Learning from Human Feedback (2024)
- LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment (2024)
- T-REG: Preference Optimization with Token-Level Reward Regularization (2024)
- Approximated Variational Bayesian Inverse Reinforcement Learning for Large Language Model Alignment (2024)
- ELEMENTAL: Interactive Learning from Demonstrations and Vision-Language Models for Reward Design in Robotics (2024)
- Real-World Offline Reinforcement Learning from Vision Language Model Feedback (2024)
- GRAPE: Generalizing Robot Policy via Preference Alignment (2024)