Poster
Information is Power: Intrinsic Control via Information Capture
Nick Rhinehart · Jenny Wang · Glen Berseth · JD Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Humans and animals explore their environment and acquire useful skills even in the absence of clear goals, exhibiting intrinsic motivation. The study of intrinsic motivation in artificial agents is concerned with the following question: what is a good general-purpose objective for an agent? We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model. This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states. We instantiate this approach as a deep reinforcement learning agent equipped with a deep variational Bayes filter. We find that our agent learns to discover, represent, and exercise control of dynamic objects in a variety of partially-observed environments sensed with visual observations without extrinsic reward.
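The abstract's objective can be illustrated with a small sketch. This is not the authors' implementation; it is a hypothetical simplification that treats the filtered belief over the latent state as a diagonal Gaussian and uses its negative entropy as an intrinsic reward, so that a more certain (more predictable) world state yields a higher reward. All function names and the diagonal-Gaussian assumption are illustrative.

```python
import numpy as np

def gaussian_entropy(std):
    """Entropy (in nats) of a diagonal Gaussian N(mu, diag(std**2)).

    H = 0.5 * k * log(2*pi*e) + sum(log std), where k = dimension.
    """
    std = np.asarray(std, dtype=float)
    return 0.5 * std.size * np.log(2.0 * np.pi * np.e) + np.sum(np.log(std))

def intrinsic_reward(belief_std):
    """Negative entropy of the latent-state belief.

    A sharper belief (smaller std), meaning the agent has gathered
    information and made the world more predictable, gives a larger reward.
    """
    return -gaussian_entropy(belief_std)
```

For example, a belief with standard deviations `[0.1, 0.1]` receives a higher intrinsic reward than one with `[1.0, 1.0]`, mirroring the abstract's point that reducing uncertainty and unpredictability is rewarded.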

Author Information

Nick Rhinehart (University of California, Berkeley)

Nick Rhinehart is a Postdoctoral Scholar in the Electrical Engineering and Computer Science Department at the University of California, Berkeley, where he works with Sergey Levine. His work focuses on fundamental and applied research in machine learning and computer vision for behavioral forecasting and control in complex environments, with an emphasis on imitation learning, reinforcement learning, and deep learning methods. Applications of his work include autonomous vehicles and first-person video. He received a Ph.D. in Robotics from Carnegie Mellon University with Kris Kitani, and B.S. and B.A. degrees in Engineering and Computer Science from Swarthmore College. Nick's work has been honored with a Best Paper Award at the ICML 2019 Workshop on AI for Autonomous Driving and a Best Paper Honorable Mention Award at ICCV 2017. His work has been published at a variety of top-tier venues in machine learning, computer vision, and robotics, including AAMAS, CoRL, CVPR, ECCV, ICCV, ICLR, ICML, ICRA, NeurIPS, and PAMI. Nick co-organized the workshop on Machine Learning in Autonomous Driving at NeurIPS 2019, the workshop on Imitation, Intent, and Interaction at ICML 2019, and the Tutorial on Inverse RL for Computer Vision at CVPR 2018.

Jenny Wang (University of California, Berkeley)
Glen Berseth (University of British Columbia)
JD Co-Reyes (UC Berkeley)

Interested in solving intelligence. Currently working on hierarchical reinforcement learning and learning a physical intuition of the world.

Danijar Hafner (Google)
Chelsea Finn (Stanford University)
Sergey Levine (UC Berkeley)
