Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models
Nitish Srivastava · Walter Talbott · Shuangfei Zhai · Joshua Susskind
Event URL: https://openreview.net/forum?id=cRHhPrLcMRg

Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent's latent state space. However, learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging. One source of difficulty is the presence of irrelevant but hard-to-model background distractions and unimportant visual details of task-relevant entities. We address this issue by learning a recurrent latent dynamics model that contrastively predicts the next observation. This simple model leads to surprisingly robust robotic control even under simultaneous camera, background, and color distractions. We outperform alternatives such as bisimulation methods, which impose state-similarity measures derived from divergence in future reward or future optimal actions. We obtain state-of-the-art results on the Distracting Control Suite, a challenging benchmark for pixel-based robotic control.
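The abstract describes replacing pixel reconstruction with contrastive prediction of the next observation's embedding. Below is a minimal PyTorch sketch of that idea, assuming a GRU-based latent dynamics model and an InfoNCE-style loss with in-batch negatives; the module names, network sizes, and training details are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: contrastive next-observation prediction with a recurrent
# latent dynamics model. All architecture choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image observation to a latent embedding."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

class ContrastiveRSSM(nn.Module):
    """Recurrent dynamics trained to contrastively predict the next
    observation's embedding instead of reconstructing pixels."""
    def __init__(self, latent_dim=128, action_dim=6, hidden_dim=256):
        super().__init__()
        self.encoder = Encoder(latent_dim)
        self.rnn = nn.GRUCell(latent_dim + action_dim, hidden_dim)
        self.predict = nn.Linear(hidden_dim, latent_dim)

    def loss(self, obs, action, next_obs, hidden, temperature=0.1):
        """InfoNCE loss: the predicted next embedding should match the
        true next observation's embedding; other batch elements serve
        as negatives (positives lie on the diagonal of the logits)."""
        z = self.encoder(obs)                              # (B, latent)
        hidden = self.rnn(torch.cat([z, action], dim=-1), hidden)
        pred = F.normalize(self.predict(hidden), dim=-1)   # (B, latent)
        target = F.normalize(self.encoder(next_obs), dim=-1)
        logits = pred @ target.t() / temperature           # (B, B)
        labels = torch.arange(obs.size(0))                 # diagonal positives
        return F.cross_entropy(logits, labels), hidden

# Illustrative usage with random tensors standing in for a replay batch.
if __name__ == "__main__":
    model = ContrastiveRSSM()
    B = 16
    obs = torch.randn(B, 3, 64, 64)
    action = torch.randn(B, 6)
    next_obs = torch.randn(B, 3, 64, 64)
    hidden = torch.zeros(B, 256)
    loss, hidden = model.loss(obs, action, next_obs, hidden)
    loss.backward()
    print(f"contrastive loss: {loss.item():.3f}")
```

Because the loss only asks the model to discriminate the true next observation from in-batch negatives, there is no pressure to model hard-to-predict background pixels, which is consistent with the robustness to distractions the abstract claims.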

Author Information

Nitish Srivastava (Apple)
Walter Talbott (Apple)
Shuangfei Zhai (Apple)
Joshua Susskind (Apple)

I was an undergraduate in Cognitive Science at UCSD from 1995 to 2003 (with some breaks). I then earned a PhD in machine learning and cognitive neuroscience from the University of Toronto, advised by Dr. Geoff Hinton and Dr. Adam Anderson. Following grad school, I returned to UCSD for a postdoctoral position. Before coming to Apple, I co-founded Emotient in 2012 and led its deep learning effort for facial expression and demographics recognition. Since joining Apple, I have led the Face ID neural network team responsible for face recognition, and then started a machine learning research group within the hardware organization focused on fundamental ML technology.
