Workshop
Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ Grand Ballroom A
Hierarchical Reinforcement Learning
Andrew G Barto · Doina Precup · Shie Mannor · Tom Schaul · Roy Fox · Carlos Florensa

Reinforcement Learning (RL) has become a powerful tool for tackling complex sequential decision-making problems. It has been used to train agents that reach super-human performance in game-playing domains such as Go and Atari, and to learn advanced control policies for high-dimensional robotic systems. Nevertheless, current RL agents have considerable difficulty when facing sparse rewards, long planning horizons, and, more generally, a scarcity of useful supervision signals. Unfortunately, the most valuable control tasks are specified in terms of high-level instructions, which imply sparse rewards when formulated as RL problems. Internal spatio-temporal abstractions and memory structures can constrain the decision space and improve data efficiency in the face of this scarcity, but they are likewise challenging for a supervisor to teach.

Hierarchical Reinforcement Learning (HRL) is emerging as a key component for finding spatio-temporal abstractions and behavioral patterns that can guide the discovery of useful large-scale control architectures, both for deep-network representations and for analytic and optimal-control methods. HRL has the potential to accelerate planning and exploration by identifying skills that can reliably reach desirable future states. It can abstract away the details of low-level controllers to facilitate long-horizon planning and meta-learning in a high-level feature space. Hierarchical structures are modular and amenable to separation of training efforts, reuse, and transfer. By imitating a core principle of human cognition, hierarchies hold promise for interpretability and explainability.
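
The two-level decomposition described above can be made concrete with a small sketch in the spirit of the options framework: a high-level policy selects among temporally extended skills, and each skill runs its own low-level policy until a termination condition fires. The class names and the random high-level choice below are illustrative assumptions only, not the method of any particular talk in this workshop.

# Minimal, self-contained sketch of a two-level hierarchical agent.
# All names here are hypothetical; in practice the high-level policy and the
# per-option policies would be learned (e.g. with SMDP-style Q-learning).
import random

class Option:
    """A temporally extended skill: its own policy plus a termination test."""
    def __init__(self, name, policy, termination):
        self.name = name
        self.policy = policy            # state -> primitive action
        self.termination = termination  # state -> True when the skill should end

    def act(self, state):
        return self.policy(state)

    def done(self, state):
        return self.termination(state)

class HierarchicalAgent:
    """High-level policy picks an option; the option runs until it terminates."""
    def __init__(self, options):
        self.options = options
        self.current = None

    def choose_option(self, state):
        # Placeholder high-level policy: uniform over available options.
        return random.choice(self.options)

    def step(self, state):
        # Commit to a skill until it terminates, then pick a new one.
        if self.current is None or self.current.done(state):
            self.current = self.choose_option(state)
        return self.current.act(state)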

There is growing interest in HRL methods for structure discovery, planning, and learning, as well as in HRL systems for shared learning and policy deployment. The goal of this workshop is to improve cohesion and synergy within the research community and to increase its impact by promoting a better understanding of the challenges and potential of HRL. The workshop further aims to bring together researchers studying both theoretical and practical aspects of HRL for a joint presentation, discussion, and evaluation of some of the many novel approaches to HRL developed in recent years.

Opening Remarks
Deep Reinforcement Learning with Subgoals (David Silver) (Invited Talk)
Landmark Options Via Reflection (LOVR) in Multi-task Lifelong Reinforcement Learning (Nicholas Denis) (Contributed Talk)
Crossmodal Attentive Skill Learner (Shayegan Omidshafiei) (Contributed Talk)
HRL with gradient-based subgoal generators, asymptotically optimal incremental problem solvers, various meta-learners, and PowerPlay (Jürgen Schmidhuber) (Invited Talk)
Meta-Learning Shared Hierarchies (Pieter Abbeel) (Invited Talk)
Best Paper Award and Talk — Learning with options that terminate off-policy (Anna Harutyunyan) (Contributed Talk)
Spotlights & Poster Session (Poster Session)
Lunch Break (Break)
Hierarchical Imitation and Reinforcement Learning for Robotics (Jan Peters) (Invited Talk)
Deep Abstract Q-Networks (Melrose Roderick) (Contributed Talk)
Federated Control with Hierarchical Multi-Agent Deep Reinforcement Learning (Saurabh Kumar) (Contributed Talk)
Effective Master-Slave Communication On A Multi-Agent Deep Reinforcement Learning System (Xiangyu Kong) (Contributed Talk)
Sample efficiency and off policy hierarchical RL (Emma Brunskill) (Invited Talk)
Coffee Break (Break)
Applying variational information bottleneck in hierarchical domains (Matt Botvinick) (Invited Talk)
Progress on Deep Reinforcement Learning with Temporal Abstraction (Doina Precup) (Invited Talk)
Panel Discussion
Poster Session