Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training
Jason Yecheng Ma · Shagun Sodhani · Dinesh Jayaraman · Osbert Bastani · Vikash Kumar · Amy Zhang

Fri Dec 09 09:15 AM -- 09:30 AM (PST)
Event URL: https://openreview.net/forum?id=uY-w8sovUa3
Reward and representation learning are two long-standing challenges for learning an expanding set of robot manipulation skills from sensory observations. Given the inherent cost and scarcity of in-domain, task-specific robot data, learning from large, diverse, offline human videos has emerged as a promising path towards acquiring a generally useful visual representation for control; however, how these human videos can be used for general-purpose reward learning remains an open question. We introduce $\textbf{V}$alue-$\textbf{I}$mplicit $\textbf{P}$re-training (VIP), a self-supervised pre-trained visual representation capable of generating dense and smooth reward functions for unseen robotic tasks. VIP casts representation learning from human videos as an offline goal-conditioned reinforcement learning problem and derives a self-supervised dual goal-conditioned value-function objective that does not depend on actions, enabling pre-training on unlabeled human videos. Theoretically, VIP can be understood as a novel implicit time contrastive objective that generates a temporally smooth embedding, enabling the value function to be implicitly defined via the embedding distance, which can then be used to construct the reward for any goal-image specified downstream task. Trained on large-scale Ego4D human videos and without any fine-tuning on in-domain, task-specific data, VIP's frozen representation can provide dense visual reward for an extensive set of simulated and real-robot tasks, enabling diverse reward-based visual control methods and significantly outperforming all prior pre-trained representations. Notably, VIP can enable simple, few-shot offline RL on a suite of real-world robot tasks with as few as 20 trajectories.
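
The reward construction described in the abstract admits a short illustration. The following is a minimal sketch, not the authors' released code: it assumes a frozen pre-trained visual encoder phi (here a hypothetical stand-in network) and shows how the value is implicitly defined as the negative embedding distance to a goal image, with the dense reward given by the change in that value between consecutive observations.

    # Minimal sketch (not the authors' implementation): a frozen visual encoder phi
    # defines a goal-conditioned value via embedding distance, and the dense reward
    # is the per-step change in that value toward the goal image.
    import torch
    import torch.nn as nn

    class FrozenEncoder(nn.Module):
        """Placeholder for a pre-trained encoder phi; in practice, the VIP representation."""
        def __init__(self, embed_dim: int = 1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )
            for p in self.parameters():  # frozen: no task-specific fine-tuning
                p.requires_grad_(False)

        def forward(self, img: torch.Tensor) -> torch.Tensor:
            return self.net(img)

    def value(phi: nn.Module, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        """Implicit value: negative embedding distance to the goal image."""
        return -torch.norm(phi(obs) - phi(goal), dim=-1)

    def reward(phi: nn.Module, obs: torch.Tensor, next_obs: torch.Tensor,
               goal: torch.Tensor) -> torch.Tensor:
        """Dense reward: increase in value when moving from obs to next_obs."""
        return value(phi, next_obs, goal) - value(phi, obs, goal)

    # Usage with image tensors of shape (batch, 3, H, W)
    phi = FrozenEncoder().eval()
    obs, next_obs, goal = (torch.rand(1, 3, 224, 224) for _ in range(3))
    print(reward(phi, obs, next_obs, goal))

Any reward-based visual control method (e.g., trajectory optimization or offline RL) can then consume this reward for a downstream task specified only by a goal image.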

Author Information

Jason Yecheng Ma (University of Pennsylvania)
Shagun Sodhani (Facebook)
Dinesh Jayaraman (University of Pennsylvania)

I am an assistant professor at UPenn’s GRASP lab. I lead the Perception, Action, and Learning (PAL) Research Group, where we work on problems at the intersection of computer vision, machine learning, and robotics.

Osbert Bastani (University of Pennsylvania)
Vikash Kumar (FAIR, Meta-AI)

I am currently a research scientist at Facebook AI Research (FAIR). I have also spent time at Google Brain, OpenAI, and the Berkeley Artificial Intelligence Research (BAIR) Lab. I did my PhD at the Movement Control Lab, CSE, University of Washington, under the supervision of Prof. Emanuel Todorov and Prof. Sergey Levine. I am interested in robotics and embodied artificial intelligence. My general interest lies in developing artificial agents that are cheap, portable, and capable of exhibiting complex behaviors.

Amy Zhang (Facebook, UC Berkeley)
