Papers
arxiv:2412.04835

Maximizing Alignment with Minimal Feedback: Efficiently Learning Rewards for Visuomotor Robot Policy Alignment

Published on Dec 6 · Submitted by thomasrantian on Dec 11
Abstract

Visuomotor robot policies, increasingly pre-trained on large-scale datasets, promise significant advancements across robotics domains. However, aligning these policies with end-user preferences remains a challenge, particularly when the preferences are hard to specify. While reinforcement learning from human feedback (RLHF) has become the predominant mechanism for alignment in non-embodied domains like large language models, it has not seen the same success in aligning visuomotor policies due to the prohibitive amount of human feedback required to learn visual reward functions. To address this limitation, we propose Representation-Aligned Preference-based Learning (RAPL), an observation-only method for learning visual rewards from significantly less human preference feedback. Unlike traditional RLHF, RAPL focuses human feedback on fine-tuning pre-trained vision encoders to align with the end-user's visual representation and then constructs a dense visual reward via feature matching in this aligned representation space. We first validate RAPL through simulation experiments in the X-Magical benchmark and Franka Panda robotic manipulation, demonstrating that it learns rewards aligned with human preferences, uses preference data more efficiently, and generalizes across robot embodiments. Finally, our hardware experiments align pre-trained Diffusion Policies for three object manipulation tasks. We find that RAPL can fine-tune these policies with 5x less real human preference data, taking the first step towards minimizing human feedback while maximizing visuomotor robot policy alignment.
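For intuition, below is a minimal PyTorch sketch of the first step the abstract describes: fine-tuning a pre-trained vision encoder from human preference feedback so that its representation reflects the end-user's notion of task similarity. The triplet setup (a reference clip plus a preferred and a rejected clip), the cosine-distance scoring, and all names are illustrative assumptions; this conveys the idea of representation alignment rather than reproducing the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEncoder(nn.Module):
    """Mean-pools per-frame features from a pre-trained vision backbone.

    `backbone` stands in for any image encoder mapping (B, C, H, W) -> (B, D);
    only this module is fine-tuned by the human preference feedback.
    """
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W) -> trajectory embedding (B, D)
        b, t, c, h, w = video.shape
        feats = self.backbone(video.reshape(b * t, c, h, w))
        return feats.reshape(b, t, -1).mean(dim=1)

def preference_alignment_loss(encoder: TrajectoryEncoder,
                              reference: torch.Tensor,
                              preferred: torch.Tensor,
                              rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss on embedding distances to a reference clip.

    The clip the human preferred should embed closer to the reference than the
    rejected clip does; minimizing this loss reshapes the encoder's features
    to agree with the user's preferences.
    """
    z_ref = F.normalize(encoder(reference), dim=-1)
    z_pos = F.normalize(encoder(preferred), dim=-1)
    z_neg = F.normalize(encoder(rejected), dim=-1)
    d_pos = 1.0 - (z_ref * z_pos).sum(dim=-1)   # cosine distance, preferred clip
    d_neg = 1.0 - (z_ref * z_neg).sum(dim=-1)   # cosine distance, rejected clip
    # -log sigmoid(d_neg - d_pos): negative log-likelihood of the human's ranking.
    return F.softplus(d_pos - d_neg).mean()

# Illustrative fine-tuning step with a toy backbone and random video tensors.
if __name__ == "__main__":
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    encoder = TrajectoryEncoder(backbone)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    ref, pos, neg = (torch.randn(2, 8, 3, 64, 64) for _ in range(3))
    loss = preference_alignment_loss(encoder, ref, pos, neg)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.4f}")
```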

Community

Paper author · Paper submitter

Our paper, “Maximizing Alignment with Minimal Feedback: Efficiently Learning Rewards for Visuomotor Robot Policy Alignment,” aims to bring the success of preference alignment popularized in non-embodied foundation models (e.g., LLMs) to visuomotor robotics models. This transition is challenging due to the extensive amount of human feedback required to learn visual reward functions. To address this, we propose an observation-only, data-efficient approach for aligning visuomotor policies with end-user preferences.
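As a rough illustration of the observation-only reward described above: once the encoder has been aligned, a dense reward can be read off by matching rollout frames to a preferred demonstration in the aligned feature space. The per-frame encoder interface, cosine normalization, and nearest-neighbour matching in this sketch are assumptions for illustration, not the exact matching rule used in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def feature_matching_reward(encoder, rollout_frames: torch.Tensor,
                            demo_frames: torch.Tensor) -> torch.Tensor:
    """Dense visual reward in the aligned feature space.

    Each rollout frame is rewarded by the (negative) distance to the closest
    frame of a preferred demonstration; nearest-neighbour matching is a simple
    stand-in for the feature matching used to construct the reward.
      encoder:        per-frame image encoder, (B, C, H, W) -> (B, D)
      rollout_frames: (T, C, H, W),  demo_frames: (N, C, H, W)
      returns:        (T,) per-timestep rewards
    """
    z_roll = F.normalize(encoder(rollout_frames), dim=-1)   # (T, D)
    z_demo = F.normalize(encoder(demo_frames), dim=-1)      # (N, D)
    dists = torch.cdist(z_roll, z_demo)                     # (T, N) pairwise distances
    return -dists.min(dim=1).values                         # closer to the demo -> higher reward
```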

In this submission, we demonstrate how our method alleviates the human feedback burden while still ensuring high-quality visuomotor robot policy alignment. We aim to inspire future research on aligning next-generation visuomotor policies with user needs while significantly reducing the human labeling burden.

Attached is a video example. Here, we want the robot to help us pick up a fork and put it in the bowl. The robot’s visuomotor policy is trained to imitate diverse teleoperators; as a result, the pre-trained policy frequently makes contact with the tines of the fork and drops it outside the bowl. With just 20 human preference labels on the policy’s generated behaviors, our method aligns the robot’s policy with the end user’s preferences: the robot grasps the handle of the fork and gently places it inside the bowl without dropping it.

[Attached video: fork_example_small.gif]
