

Poster in Workshop: Generalization in Planning (GenPlan '23)

Inductive Generalization in Reinforcement Learning from Specifications

Rohit Kushwah · Vignesh Subramanian · Suguman Bansal · Subhajit Roy

Keywords: [ Generalization ] [ Reinforcement Learning ] [ Logical Specifications ]


Abstract:

Reinforcement Learning (RL) from logical specifications is a promising approach to learning control policies for complex long-horizon tasks. While these algorithms showcase remarkable scalability and efficiency in learning, a persistent hurdle lies in their limited ability to generalize the policies they generate. In this work, we present an inductive framework to improve policy generalization from logical specifications. We observe that logical specifications can be used to define a class of inductive tasks known as repeated tasks. These are tasks with similar overarching goals but differing inductively in low-level predicates and distributions. Hence, policies for repeated tasks should also be inductive. To this end, we present a compositional approach that learns policies for unseen repeated tasks by training on only a few repeated tasks. Our approach is evaluated on challenging control benchmarks with continuous state and action spaces, showing promising results in handling long-horizon tasks with improved generalization.
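To make the idea of "repeated tasks" and compositional policies concrete, the following is a minimal illustrative sketch, not the authors' code: it assumes a toy 1-D state, represents a repeated task as a sequence of subgoal predicates (the inductive structure is "one more subgoal"), and stands in for learned sub-policies with a trivial bounded controller. The point it illustrates is that controllers composed on short task instances can be reused unchanged on longer, unseen instances of the same family.

```python
# Hypothetical sketch of inductive "repeated tasks" and policy composition.
# All names (RepeatedTask, subgoal_controller, execute) are illustrative
# assumptions, not the paper's API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RepeatedTask:
    # Predicates to be achieved in order; instances of the same family
    # differ only in how many times the low-level pattern repeats.
    subgoals: List[Callable[[float], bool]]


def make_reach_task(goals: List[float], tol: float = 0.05) -> RepeatedTask:
    # "Reach g1, then g2, ..." -- adding one more goal gives the next
    # instance of the inductive family.
    return RepeatedTask([(lambda s, g=g: abs(s - g) < tol) for g in goals])


def subgoal_controller(goal: float, step: float = 0.5) -> Callable[[float], float]:
    # A bounded proportional controller standing in for a learned sub-policy.
    return lambda s: max(-step, min(step, goal - s))


def execute(state: float, goals: List[float],
            tol: float = 0.05, max_steps: int = 1000) -> bool:
    # Compositional execution: run each sub-policy until its subgoal
    # predicate holds, then hand off to the next one.
    task = make_reach_task(goals, tol)
    for pred, g in zip(task.subgoals, goals):
        ctrl = subgoal_controller(g)
        for _ in range(max_steps):
            if pred(state):
                break
            state = state + ctrl(state)
        else:
            return False  # subgoal not reached within budget
    return True
```

Because the per-subgoal controllers are independent of the sequence length, the same composition handles a five-goal instance even if only two-goal instances were seen during "training", which is the generalization behavior the abstract describes at a high level.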
