

Poster
in
Workshop: Goal-Conditioned Reinforcement Learning

Hierarchical Empowerment: Toward Tractable Empowerment-Based Skill Learning

Andrew Levy · Sreehari Rammohan · Alessandro Allievi · Scott Niekum · George Konidaris

Keywords: [ empowerment ] [ goal-conditioned reinforcement learning ] [ curriculum learning ] [ hierarchical reinforcement learning ] [ skill learning ]


Abstract:

General-purpose agents will require large repertoires of skills. Empowerment---the maximum mutual information between skills and states---provides a pathway for learning large collections of distinct skills, but mutual information is difficult to optimize. We introduce a new framework, Hierarchical Empowerment, that makes computing empowerment more tractable by integrating concepts from Goal-Conditioned Hierarchical Reinforcement Learning. Our framework makes two specific contributions. First, we introduce a new variational lower bound on mutual information that can be used to compute empowerment over short horizons. Second, we introduce a hierarchical architecture for computing empowerment over exponentially longer time scales. We verify the contributions of the framework in a series of simulated robotics tasks. In a popular ant navigation domain, our four-level agents are able to learn skills that cover a surface area over two orders of magnitude larger than prior work.
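As background for the variational bound mentioned in the abstract: empowerment-based skill learning typically builds on the classic Barber–Agakov lower bound, I(Z; S) ≥ H(Z) + E[log q(z|s)], where q(z|s) is a learned skill discriminator. The sketch below is a minimal stdlib-only illustration of that standard bound with a uniform skill prior and a hypothetical perfect discriminator; it is not the paper's new bound, whose specific form is not given in this abstract.

```python
import math
import random

K = 4     # number of discrete skills (illustrative choice)
N = 1000  # number of sampled (skill, resulting-state) pairs

random.seed(0)
# Skills drawn from a uniform prior, so the entropy term H(Z) = log K.
skills = [random.randrange(K) for _ in range(N)]

# Hypothetical discriminator outputs log q(z | s) for each sampled pair.
# A perfect discriminator assigns probability 1 to the true skill, so
# log q(z | s) = 0 for every sample; real discriminators give values < 0.
log_q = [0.0] * N

# Barber-Agakov lower bound: I(Z; S) >= H(Z) + E[log q(z | s)].
mi_lower_bound = math.log(K) + sum(log_q) / N
print(mi_lower_bound)  # equals log 4 ~ 1.386 under the perfect discriminator
```

With an imperfect discriminator the average of `log_q` is negative, so the bound drops below log K; maximizing the bound over both the skill policy and the discriminator is what makes each skill reach distinguishable states.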
