VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training
Jason Yecheng Ma · Shagun Sodhani · Dinesh Jayaraman · Osbert Bastani · Vikash Kumar · Amy Zhang

We introduce Value-Implicit Pre-training (VIP), a self-supervised pre-trained visual representation capable of generating dense and smooth reward functions for unseen robotic tasks. VIP casts representation learning from human videos as an offline goal-conditioned reinforcement learning problem and derives a self-supervised dual goal-conditioned value-function objective that does not depend on actions, enabling pre-training on unlabeled human videos. Theoretically, VIP can be understood as a novel form of implicit time contrastive learning that produces a temporally smooth embedding, allowing the value function to be implicitly defined via the embedding distance and used as the reward function for any downstream task specified through goal images. Trained on large-scale Ego4D human videos and without any fine-tuning on task-specific robot data, VIP's frozen representation can provide dense visual reward for an extensive set of simulated and real-robot tasks, enabling diverse reward-based policy learning methods, including visual trajectory optimization and online/offline RL, and significantly outperforming all prior pre-trained representations. Notably, VIP can enable few-shot offline RL on a suite of real-world robot tasks with as few as 20 trajectories.
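To make the core idea concrete, here is a minimal sketch of how an embedding-distance value function can yield a dense per-step reward from goal images, as the abstract describes. The encoder `phi` and the function names below are illustrative stand-ins, not VIP's actual API; the sketch assumes the value is the negative embedding distance to the goal and the reward is the change in that value between consecutive observations.

```python
import numpy as np

def embedding_distance_reward(phi, obs_t, obs_t1, goal):
    """Dense reward from an embedding-distance value function.

    The value of an observation is implicitly -||phi(o) - phi(goal)||,
    and the per-step reward is the change in value between consecutive
    frames (a hypothetical sketch; `phi` stands in for a frozen
    pre-trained visual encoder such as VIP's).
    """
    v_t = -np.linalg.norm(phi(obs_t) - phi(goal))
    v_t1 = -np.linalg.norm(phi(obs_t1) - phi(goal))
    return v_t1 - v_t

# Toy encoder standing in for the frozen representation:
# mean-pool an HxWxC image into a small feature vector.
phi = lambda img: img.mean(axis=(0, 1))

goal = np.ones((8, 8, 3))        # goal image
far = np.zeros((8, 8, 3))        # observation far from the goal
near = 0.5 * np.ones((8, 8, 3))  # observation closer to the goal

# Moving toward the goal yields positive reward.
r = embedding_distance_reward(phi, far, near, goal)
print(r > 0)  # True
```

A reward of this form requires only images (no action labels), which is what allows the representation to be pre-trained on unlabeled human videos and then reused, frozen, as a task reward for downstream policy learning.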

Author Information

Jason Yecheng Ma (University of Pennsylvania)
Shagun Sodhani (Facebook)
Dinesh Jayaraman (University of Pennsylvania)

I am an assistant professor at UPenn’s GRASP lab. I lead the Perception, Action, and Learning (PAL) Research Group, where we work on problems at the intersection of computer vision, machine learning, and robotics.

Osbert Bastani (University of Pennsylvania)
Vikash Kumar (UW, CSE)
Amy Zhang (Facebook, UC Berkeley)
