Goal-conditioned hierarchical reinforcement learning (HRL) is a promising approach for scaling up reinforcement learning (RL) techniques. However, it often suffers from training inefficiency because the action space of the high-level policy, i.e., the goal space, is often large. Searching in a large goal space poses difficulties for both high-level subgoal generation and low-level policy learning. In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a k-step adjacent region of the current state using an adjacency constraint. We theoretically prove that the proposed adjacency constraint preserves the optimal hierarchical policy in deterministic MDPs, and show that this constraint can be practically implemented by training an adjacency network that can discriminate between adjacent and non-adjacent subgoals. Experimental results on discrete and continuous control tasks show that incorporating the adjacency constraint improves the performance of state-of-the-art HRL approaches in both deterministic and stochastic environments.
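To make the idea of an adjacency network concrete, below is a minimal, illustrative sketch, not the authors' exact implementation: a binary classifier that predicts whether a candidate subgoal lies within k environment steps of the current state. The architecture, the loss, and the way (state, subgoal, label) pairs would be collected from stored trajectories are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's code) of an adjacency network:
# given a state and a candidate subgoal, output a logit for "reachable within
# k steps". Training labels are assumed to come from replayed trajectories.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacencyNetwork(nn.Module):
    def __init__(self, state_dim: int, goal_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # logit: subgoal is k-step adjacent
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1)).squeeze(-1)

def adjacency_loss(model: AdjacencyNetwork,
                   states: torch.Tensor,
                   goals: torch.Tensor,
                   labels: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy on (state, subgoal) pairs.

    labels[i] = 1.0 if goals[i] was reached within k steps of states[i]
    in a stored trajectory, else 0.0 (this labeling scheme is an assumption).
    """
    logits = model(states, goals)
    return F.binary_cross_entropy_with_logits(logits, labels)
```

Under this sketch, the high-level policy could penalize or reject proposed subgoals whose predicted adjacency probability is low, which is one way the search space could be restricted to the k-step adjacent region described in the abstract.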
Author Information
Tianren Zhang (Tsinghua University)
Shangqi Guo (Tsinghua University)
Tian Tan (Stanford University)
Xiaolin Hu (Tsinghua University)
Feng Chen (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning » Wed. Dec 9th, 04:00 -- 04:10 AM, Room: Orals & Spotlights: Reinforcement Learning
More from the Same Authors
- 2020 Poster: Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection » Xiang Li · Wenhai Wang · Lijun Wu · Shuo Chen · Xiaolin Hu · Jun Li · Jinhui Tang · Jian Yang
- 2019 Poster: Convolution with even-sized kernels and symmetric padding » Shuang Wu · Guanrui Wang · Pei Tang · Feng Chen · Luping Shi
- 2013 Poster: Variational Planning for Graph-based MDPs » Qiang Cheng · Qiang Liu · Feng Chen · Alexander Ihler