PROGRESSOR: A Perceptually Guided Reward Estimator with Self-Supervised Online Refinement

Tewodros W. Ayalew

The University of Chicago

Xiao Zhang *

The University of Chicago

Kevin Yuanbo Wu *

The University of Chicago

Tianchong Jiang

Toyota Technological Institute at Chicago

Michael Maire

The University of Chicago

Matthew Walter

Toyota Technological Institute at Chicago

*Equal contribution


Abstract

We present PROGRESSOR, a novel framework that learns a task-agnostic reward function from videos, enabling policy training through goal-conditioned reinforcement learning (RL) without manual supervision. Underlying this reward is an estimate of the distribution over task progress as a function of the current, initial, and goal observations that is learned in a self-supervised fashion. Crucially, PROGRESSOR refines rewards adversarially during online RL training by pushing back predictions for out-of-distribution observations, to mitigate distribution shift inherent in non-expert observations. Utilizing this progress prediction as a dense reward together with an adversarial push-back, we show that PROGRESSOR enables robots to learn complex behaviors without any external supervision. Pretrained on large-scale egocentric human video from EPIC-KITCHENS, PROGRESSOR requires no fine-tuning on in-domain task-specific data for generalization to real-robot offline RL under noisy demonstrations, outperforming contemporary methods that provide dense visual reward for robotic learning. Our findings highlight the potential of PROGRESSOR for scalable robotic applications where direct action labels and task-specific rewards are not readily available.

Method

We propose to learn a unified reward model via an encoder that estimates the relative progress of an observation $\mathcal{o}_{j}$ with respect to an initial observation $\mathcal{o}_{i}$ and a goal observation $\mathcal{o}_{g}$, all of which are purely pixel-based.

Learning the Self-Supervised Reward Model


We optimize our reward model $\mathcal{r}_{\theta}$ to predict the distribution of progress along expert trajectories. We use a shared visual encoder to compute per-frame representations, followed by several MLPs that produce the final estimate:

$$E_{\theta}(\mathcal{o}_{i}, \mathcal{o}_{j}, \mathcal{o}_{g}) = \mathcal{N}\left(\mu, \sigma^2\right)$$
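The following is a minimal PyTorch sketch of such a progress estimator; the backbone choice, hidden sizes, and the sigmoid/log-variance parameterization are our assumptions rather than the authors' exact architecture.

# Minimal sketch (not the authors' exact architecture): a shared visual backbone
# embeds each frame, and an MLP head maps the concatenated (initial, current, goal)
# embeddings to the parameters (mu, sigma^2) of a Gaussian over task progress.
import torch
import torch.nn as nn

class ProgressEstimator(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.backbone = backbone  # shared per-frame visual encoder (assumed, e.g., a ResNet trunk)
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # outputs (mu, log_var)
        )

    def forward(self, o_i, o_j, o_g):
        # Embed each frame with the shared encoder and concatenate the features.
        z = torch.cat([self.backbone(o_i), self.backbone(o_j), self.backbone(o_g)], dim=-1)
        mu, log_var = self.head(z).chunk(2, dim=-1)
        mu = torch.sigmoid(mu)    # progress as a fraction in [0, 1]
        var = torch.exp(log_var)  # sigma^2 > 0
        return mu.squeeze(-1), var.squeeze(-1)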

Using the Reward Model in Online RL

We define the reward as a function of the model's predicted outputs for a frame triplet $(\mathcal{o}_{i}, \mathcal{o}_{j}, \mathcal{o}_{g})$ sampled from a trajectory:

$$\mathcal{r}_{\theta}(\mathcal{o}_{i}, \mathcal{o}_{j}, \mathcal{o}_{g}) = \mu - \alpha \mathcal{H}\left(\mathcal{N}(\mu, \sigma^2)\right)$$
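Since the entropy of a univariate Gaussian is $\mathcal{H}(\mathcal{N}(\mu, \sigma^2)) = \tfrac{1}{2}\log(2\pi e \sigma^2)$, the reward favors high predicted progress while penalizing uncertain predictions. A small sketch of this computation, assuming the estimator above; the value of $\alpha$ is illustrative:

# Hedged sketch: convert the predicted progress distribution into a dense reward.
# Gaussian entropy H = 0.5 * log(2 * pi * e * sigma^2), so larger sigma^2 lowers the reward.
import math
import torch

def progress_reward(mu: torch.Tensor, var: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    # alpha = 0.1 is illustrative; alpha is a weighting hyperparameter.
    entropy = 0.5 * torch.log(2 * math.pi * math.e * var)
    return mu - alpha * entropy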

Adversarial Online Refinement via Push-Back

To tackle this distribution shift, we implement an adversarial online refinement strategy, which we refer to as “push-back”, that enables the reward model $\mathcal{r}_{\theta}$ to differentiate between in-distribution trajectories $\tau$ and out-of-distribution trajectories $\tau'$. For a frame triplet $(\mathcal{o}_i^{\tau_k'}, \mathcal{o}_j^{\tau_k'}, \mathcal{o}_g^{\tau_k'})$ sampled from $\tau_k'$ with estimated progress $\mu_{\tau_k'}$ from $E_{\theta}$, we update $E_{\theta}$ so that it learns to push back the current estimate to $\beta\mu_{\tau_k'}$, where $\beta \in [0,1]$ is a decay factor.


During online training, we fine-tune $E_{\theta}$ with a hybrid objective that combines the self-supervised progress-prediction loss on expert data with the push-back loss on the agent's own rollouts.
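A hedged sketch of what such a hybrid update could look like; the Gaussian negative log-likelihood loss, the expert progress target $(j-i)/(g-i)$, and the equal weighting of the two terms are assumptions about details not spelled out here.

# Hedged sketch of the hybrid online objective (loss form and batching are assumptions).
# Expert triplets supervise the true progress target; triplets from the current policy's
# rollouts are "pushed back" toward beta times the model's own detached estimate.
import torch

def gaussian_nll(mu, var, target):
    # Negative log-likelihood of a Gaussian prediction, up to an additive constant.
    return 0.5 * (torch.log(var) + (target - mu) ** 2 / var).mean()

def hybrid_loss(model, expert_batch, policy_batch, beta: float = 0.8):
    # Self-supervised progress loss on expert frames.
    o_i, o_j, o_g, target = expert_batch          # target assumed to be (j - i) / (g - i)
    mu_e, var_e = model(o_i, o_j, o_g)
    expert_loss = gaussian_nll(mu_e, var_e, target)

    # Push-back loss on out-of-distribution policy frames: the regression
    # target is the model's own (detached) estimate, decayed by beta.
    o_i_p, o_j_p, o_g_p = policy_batch
    mu_p, var_p = model(o_i_p, o_j_p, o_g_p)
    pushback_loss = gaussian_nll(mu_p, var_p, beta * mu_p.detach())

    return expert_loss + pushback_loss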


Experimental Evaluation

We evaluate the effectiveness with which PROGRESSOR learns reward functions from visual demonstrations that enable robots to perform various manipulation tasks in simulation as well as in the real world.

Visualization of the robotic tasks: (a-d) Real-world environments with a UR5 arm. (e-j) Simulation environments for evaluation using the Meta-World benchmark.

Simulated Experiments

In our simulated experiments, we used benchmark tasks from the Meta-World environment, selecting six table-top manipulation tasks: door-open, drawer-open, hammer, peg-insert-side, pick-place, and reach.

Visualization of policy learning in the Meta-World simulation environment. We run PROGRESSOR and several baselines on six diverse tasks of various difficulties. We also run PROGRESSOR without online push-back as an ablation. We report the environment reward during training (left) and the task success rate from 10 rollouts (right) averaged over five seeds. The solid line denotes the mean and the transparent area denotes standard deviation. PROGRESSOR demonstrates clear advantages in both metrics, especially at early stages of training.

Real-World Robotic Experiments

Pretraining on Kitchen Dataset

We randomly sample frame triplets $(\mathcal{o}_{i}, \mathcal{o}_{j}, \mathcal{o}_{g})$ from the videos, ensuring a maximum frame gap of $|g - i| \leq 2000$.
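A minimal sketch of this triplet sampling; the ordering constraint $i < j \leq g$ and the index bookkeeping are our assumptions beyond the stated gap limit.

# Illustrative triplet sampler for a video with num_frames frames. The gap limit
# |g - i| <= 2000 comes from the text; the ordering i < j <= g is assumed.
import random

def sample_triplet(num_frames: int, max_gap: int = 2000):
    i = random.randrange(num_frames - 2)                         # initial frame
    g = random.randint(i + 2, min(i + max_gap, num_frames - 1))  # goal frame
    j = random.randint(i + 1, g)                                 # current frame
    return i, j, g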


Real-World Few-Shot Offline Reinforcement Learning with Noisy Demonstrations

We compare PROGRESSOR with R3M and VIP by freezing the pre-trained models and using them as reward prediction models to train RWR-ACT on downstream robotic learning tasks.
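As a rough illustration of how a frozen reward model can re-weight demonstrations in the spirit of reward-weighted regression, consider the sketch below; the exponential weighting, temperature, and normalization are assumptions rather than the exact RWR-ACT recipe.

# Hedged sketch of reward-weighted imitation: returns computed with the frozen
# reward model re-weight each demonstration trajectory's behavior-cloning loss.
import torch

def rwr_weights(trajectory_returns: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # temperature = 1.0 is illustrative; centering the returns improves numerical stability.
    advantages = trajectory_returns - trajectory_returns.mean()
    weights = torch.exp(advantages / temperature)
    return weights / weights.sum()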


Zero-Shot Reward Estimation for In-Domain and Out-of-Domain Videos



BibTeX citation

@misc{ayalew2024progressorperceptuallyguidedreward,
  title={PROGRESSOR: A Perceptually Guided Reward Estimator with Self-Supervised Online Refinement},
  author={Tewodros Ayalew and Xiao Zhang and Kevin Yuanbo Wu and Tianchong Jiang and Michael Maire and Matthew R. Walter},
  year={2024},
  eprint={2411.17764},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2411.17764},
}