Workshop
Sat Dec 9th 08:00 AM -- 06:30 PM @ Grand Ballroom A
Hierarchical Reinforcement Learning
Andrew G Barto · Doina Precup · Shie Mannor · Tom Schaul · Roy Fox · Carlos Florensa Campo
Reinforcement Learning (RL) has become a powerful tool for tackling complex sequential decision-making problems. It has been shown to train agents to superhuman capability in game-playing domains such as Go and Atari, and it can learn advanced control policies for high-dimensional robotic systems. Nevertheless, current RL agents struggle with sparse rewards, long planning horizons, and, more generally, a scarcity of useful supervision signals. Unfortunately, many of the most valuable control tasks are specified only in terms of high-level instructions, which implies sparse rewards when they are formulated as RL problems. Internal spatio-temporal abstractions and memory structures can constrain the decision space and improve data efficiency in the face of this scarcity, but they are likewise challenging for a supervisor to teach.

Hierarchical Reinforcement Learning (HRL) is emerging as a key component for finding spatio-temporal abstractions and behavioral patterns that can guide the discovery of useful large-scale control architectures, both for deep-network representations and for analytic and optimal-control methods. HRL has the potential to accelerate planning and exploration by identifying skills that can reliably reach desirable future states. It can abstract away the details of low-level controllers to facilitate long-horizon planning and meta-learning in a high-level feature space. Hierarchical structures are modular and amenable to separation of training efforts, reuse, and transfer. By imitating a core principle of human cognition, hierarchies hold promise for interpretability and explainability.

There is growing interest in HRL methods for structure discovery, planning, and learning, as well as in HRL systems for shared learning and policy deployment. The goal of this workshop is to improve cohesion and synergy within the research community and to increase its impact by promoting a better understanding of the challenges and potential of HRL. The workshop further aims to bring together researchers studying both theoretical and practical aspects of HRL for a joint presentation, discussion, and evaluation of some of the numerous novel approaches to HRL developed in recent years.
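The temporally extended "skills" discussed above are often formalized as options: a low-level policy paired with a termination condition, selected by a high-level policy. The following is a minimal illustrative sketch of that two-level control loop on a toy 1-D chain; all names (`Option`, `run_episode`, the goal state) are hypothetical and not drawn from any specific paper in the program.

```python
# Minimal sketch of the options framework: a high-level policy picks an
# option; the option's low-level policy issues primitive actions until
# its termination condition fires. Purely illustrative.

class Option:
    def __init__(self, name, policy, terminate):
        self.name = name
        self.policy = policy        # state -> primitive action
        self.terminate = terminate  # state -> bool (stop this option?)

def run_episode(state, options, choose_option, step, goal, max_steps=100):
    """Run a two-level agent: choose_option selects among temporally
    extended options; each executes primitives until it terminates."""
    trajectory = []
    steps = 0
    while steps < max_steps and state != goal:
        opt = choose_option(state, options)
        while steps < max_steps:
            action = opt.policy(state)
            state = step(state, action)
            trajectory.append((opt.name, action, state))
            steps += 1
            if opt.terminate(state):
                break
    return state, trajectory

# Toy 1-D chain: states are integers, primitive actions are -1 / +1.
step = lambda s, a: s + a
# Each option runs until it reaches a "landmark" state (multiple of 5),
# so the high-level policy plans over landmarks, not primitive steps.
go_right = Option("go-right", lambda s: +1, lambda s: s % 5 == 0)
go_left = Option("go-left", lambda s: -1, lambda s: s % 5 == 0)

# A trivial high-level policy that heads toward a hypothetical goal at 10.
choose = lambda s, opts: opts[0] if s < 10 else opts[1]

final, traj = run_episode(0, [go_right, go_left], choose, step, goal=10)
print(final)  # reaches the goal state 10 in two option invocations
```

The point of the sketch is the structural split: the inner loop is the skill, the outer loop is the planner, and the planner only makes a decision when a skill terminates, which is what shortens the effective planning horizon.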

09:00 AM Opening Remarks
Roy Fox
09:10 AM Deep Reinforcement Learning with Subgoals (David Silver) (Invited Talk)
David Silver
09:40 AM Landmark Options Via Reflection (LOVR) in Multi-task Lifelong Reinforcement Learning (Nicholas Denis) (Contributed Talk)
Nicholas Denis
09:50 AM Crossmodal Attentive Skill Learner (Shayegan Omidshafiei) (Contributed Talk)
Shayegan Omidshafiei
10:00 AM HRL with gradient-based subgoal generators, asymptotically optimal incremental problem solvers, various meta-learners, and PowerPlay (Jürgen Schmidhuber) (Invited Talk)
Jürgen Schmidhuber
11:00 AM Meta-Learning Shared Hierarchies (Pieter Abbeel) (Invited Talk)
Pieter Abbeel
11:30 AM Best Paper Award and Talk — Learning with options that terminate off-policy (Anna Harutyunyan) (Contributed Talk)
Anna Harutyunyan
11:55 AM Spotlights & Poster Session (Poster Session)
Dave Abel, Nicholas Denis, Maria Eckstein, Ronan Fruit, Karan Goel, Joshua Gruenstein, Anna Harutyunyan, Martin Klissarov, Xiangyu Kong, Aviral Kumar, Saurabh Kumar, Miao Liu, Daniel McNamee, Shayegan Omidshafiei, Silviu Pitis, Paulo Rauber, Melrose Roderick, Tianmin Shu, Yizhou Wang, Shangtong Zhang
12:30 PM Lunch Break (Break)
01:30 PM Hierarchical Imitation and Reinforcement Learning for Robotics (Jan Peters) (Invited Talk)
Jan Peters
02:00 PM Deep Abstract Q-Networks (Melrose Roderick) (Contributed Talk)
Melrose Roderick
02:10 PM Federated Control with Hierarchical Multi-Agent Deep Reinforcement Learning (Saurabh Kumar) (Contributed Talk)
Saurabh Kumar
02:20 PM Effective Master-Slave Communication On A Multi-Agent Deep Reinforcement Learning System (Xiangyu Kong) (Contributed Talk)
Xiangyu Kong
02:30 PM Sample efficiency and off-policy hierarchical RL (Emma Brunskill) (Invited Talk)
Emma Brunskill
03:00 PM Coffee Break (Break)
03:30 PM Applying variational information bottleneck in hierarchical domains (Matt Botvinick) (Invited Talk)
Matt Botvinick
04:00 PM Progress on Deep Reinforcement Learning with Temporal Abstraction (Doina Precup) (Invited Talk)
Doina Precup
04:30 PM Panel Discussion
Matt Botvinick, Emma Brunskill, Marcos Campos, Jan Peters, Doina Precup, David Silver, Josh Tenenbaum, Roy Fox
05:30 PM Poster Session
Dave Abel, Nicholas Denis, Maria Eckstein, Ronan Fruit, Karan Goel, Joshua Gruenstein, Anna Harutyunyan, Martin Klissarov, Xiangyu Kong, Aviral Kumar, Saurabh Kumar, Miao Liu, Daniel McNamee, Shayegan Omidshafiei, Silviu Pitis, Paulo Rauber, Melrose Roderick, Tianmin Shu, Yizhou Wang, Shangtong Zhang