
Workshop: Generalization in Planning (GenPlan '23)

Learning Generalizable Symbolic Options for Transfer in Reinforcement Learning

Rashmeet Kaur Nayyar · Shivanshu Verma · Siddharth Srivastava

Keywords: [ transfer ] [ generalization ] [ learning abstractions ] [ hierarchical reinforcement learning ] [ option discovery ]


This paper presents a new approach for transfer Reinforcement Learning (RL) on Stochastic Shortest Path (SSP) problems in factored domains with unknown transition functions. The approach takes as input a set of problem instances with sparse reward functions. It first learns a semantically well-defined state abstraction, then uses this abstraction to invent high-level options, learn abstract policies for executing them, and create abstract symbolic representations of them. Given a new problem instance, the overall approach conducts a novel bi-directional search over the learned option representations while inventing new options as needed. The main contributions are methods for continually learning transferable, generalizable knowledge in the form of symbolically represented options, and for integrating search techniques with RL to solve new problems by efficiently composing the learned options. Empirical results show that the resulting approach effectively transfers learned knowledge and achieves superior sample efficiency compared to state-of-the-art methods.
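To make the idea of composing symbolically represented options concrete, here is a minimal, hypothetical sketch. It is not the authors' implementation: the class and symbol names are invented for illustration, the option effects are assumed deterministic at the abstract level, and a plain forward breadth-first search stands in for the paper's bi-directional search. Each option pairs a symbolic precondition/effect description (used for search) with what would, in the full approach, be an RL-learned low-level policy.

```python
from collections import deque
from dataclasses import dataclass
from typing import FrozenSet, List, Optional

# An abstract state is a set of true symbolic propositions.
State = FrozenSet[str]

@dataclass(frozen=True)
class SymbolicOption:
    """Hypothetical symbolic wrapper around a learned option:
    preconditions gate initiation; add/delete effects describe
    the abstract outcome of executing its low-level policy."""
    name: str
    precondition: FrozenSet[str]
    effects_add: FrozenSet[str]
    effects_del: FrozenSet[str]

    def applicable(self, state: State) -> bool:
        return self.precondition <= state

    def apply(self, state: State) -> State:
        return frozenset((state - self.effects_del) | self.effects_add)

def plan_with_options(start: State, goal: FrozenSet[str],
                      options: List[SymbolicOption]) -> Optional[List[str]]:
    """Forward breadth-first search over abstract states, composing
    learned options until every goal symbol holds."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for opt in options:
            if opt.applicable(state):
                nxt = opt.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [opt.name]))
    return None  # goal unreachable with current options

# Usage on a toy key-and-door domain (all names illustrative):
options = [
    SymbolicOption("goto-key", frozenset(), frozenset({"at-key"}), frozenset()),
    SymbolicOption("pick-key", frozenset({"at-key"}),
                   frozenset({"have-key"}), frozenset()),
    SymbolicOption("open-door", frozenset({"have-key"}),
                   frozenset({"door-open"}), frozenset()),
]
plan = plan_with_options(frozenset(), frozenset({"door-open"}), options)
# -> ["goto-key", "pick-key", "open-door"]
```

In the full approach, when `plan_with_options` returns `None`, a new option would be invented via RL on the concrete problem and added to the library, which is where the continual-learning aspect enters.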
