Poster
VIME: Variational Information Maximizing Exploration
Rein Houthooft · Xi Chen · Peter Chen · Yan Duan · John Schulman · Filip De Turck · Pieter Abbeel

Tue Dec 06 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #117

Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximizing information gain about the agent's belief of the environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks, which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance than heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
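For concreteness, here is a minimal NumPy sketch of the reward modification the abstract describes: the external reward is augmented with an intrinsic bonus equal to eta times the KL divergence between the dynamics model's variational posterior after and before updating on an observed transition (s, a, s'). It assumes a fully factorized Gaussian posterior over the weights of the Bayesian neural network dynamics model, as in the paper's implementation; the variational update step itself is omitted, and the toy parameter arrays at the end are illustrative stand-ins rather than the authors' code.

```python
import numpy as np


def diag_gaussian_kl(mu_new, log_sigma_new, mu_old, log_sigma_old):
    """D_KL[N(mu_new, sigma_new^2) || N(mu_old, sigma_old^2)] for fully
    factorized Gaussians, summed over all weights of the BNN dynamics model."""
    var_new = np.exp(2.0 * log_sigma_new)
    var_old = np.exp(2.0 * log_sigma_old)
    return np.sum(
        log_sigma_old - log_sigma_new
        + (var_new + (mu_new - mu_old) ** 2) / (2.0 * var_old)
        - 0.5
    )


def vime_reward(r_ext, phi_new, phi_old, eta=1e-3):
    """VIME-modified reward r' = r + eta * D_KL[q(theta; phi') || q(theta; phi)],
    where phi' is the variational posterior after one update on the observed
    transition (s, a, s') and phi is the posterior before that update."""
    mu_new, log_sigma_new = phi_new
    mu_old, log_sigma_old = phi_old
    info_gain = diag_gaussian_kl(mu_new, log_sigma_new, mu_old, log_sigma_old)
    return r_ext + eta * info_gain


# Toy usage: the "updated" parameters below merely stand in for the result
# of a real variational inference step on a single observed transition.
rng = np.random.default_rng(0)
mu_old = rng.normal(size=100)
log_sigma_old = np.full(100, -2.0)
mu_new = mu_old + 0.01 * rng.normal(size=100)
log_sigma_new = log_sigma_old - 0.005
print(vime_reward(1.0, (mu_new, log_sigma_new), (mu_old, log_sigma_old)))
```

The hyperparameter eta controls the trade-off between exploiting the external reward and seeking transitions that are informative about the dynamics; because the bonus is just an additive term on the MDP reward, the scheme can sit on top of any underlying RL algorithm.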

Author Information

Rein Houthooft (Ghent University - iMinds and UC Berkeley and OpenAI)
Xi Chen (UC Berkeley and OpenAI)

Xi Chen is an associate professor with tenure at the Stern School of Business at New York University, with affiliated appointments in Computer Science and the Center for Data Science. Before that, he was a postdoc in the group of Prof. Michael Jordan at UC Berkeley. He obtained his Ph.D. from the Machine Learning Department at Carnegie Mellon University (CMU). He studies high-dimensional statistical learning, online learning, large-scale stochastic optimization, and applications to operations. He has published more than 20 journal articles in statistics, machine learning, and operations, and 30 papers in top peer-reviewed machine learning conference proceedings. He received the NSF CAREER Award, the ICSA Outstanding Young Researcher Award, and Faculty Research Awards from Google, Adobe, Alibaba, and Bloomberg, and was featured in Forbes' “30 Under 30 in Science” list.

Peter Chen (UC Berkeley and OpenAI)
Yan Duan (UC Berkeley)
John Schulman (OpenAI)
Filip De Turck (Ghent University - iMinds)
Pieter Abbeel (OpenAI / UC Berkeley / Gradescope)

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), making robots learn through their own trial and error (reinforcement learning), and speeding up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA, early career awards from NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.
