Poster in Workshop: Generalization in Planning (GenPlan '23)

Exploiting Contextual Structure to Generate Useful Auxiliary Tasks

Benedict Quartey · Ankit Shah · George Konidaris

Keywords: [ auxiliary task generation ] [ large language models ] [ reinforcement learning ] [ off-policy learning ]


Abstract:

Reinforcement learning requires interaction with environments, which can be prohibitively expensive, especially in robotics. This constraint necessitates approaches that work with limited environmental interaction by maximizing the reuse of previous experiences. We propose an approach that maximizes experience reuse while learning to solve a given task by generating and simultaneously learning useful auxiliary tasks. To generate these tasks, we construct an abstract temporal logic representation of the given task and leverage large language models to generate context-aware object embeddings that facilitate object replacements. Counterfactual reasoning and off-policy methods allow us to learn these auxiliary tasks simultaneously while solving the given target task. We combine these insights into a novel framework for multitask reinforcement learning and experimentally show that our generated auxiliary tasks have underlying exploration requirements similar to those of the given task, thereby maximizing the utility of directed exploration. Our approach allows agents to automatically learn additional useful policies without extra environment interaction.
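To make the task-generation idea concrete, below is a minimal sketch of one plausible reading of the abstract: auxiliary tasks are produced by swapping the goal object in a temporal-logic task specification for other objects whose LLM-derived embeddings are contextually similar. Every name here (`generate_auxiliary_tasks`, `cosine_similarity`, the similarity threshold, and the toy embeddings) is an illustrative assumption, not the authors' actual implementation.

```python
# Hypothetical sketch: auxiliary-task generation via object substitution
# in a temporal-logic formula, guided by embedding similarity.
# Names and the 0.8 threshold are assumptions for illustration only.

from typing import Dict, List, Tuple
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def generate_auxiliary_tasks(
    task_formula: str,
    target_object: str,
    object_embeddings: Dict[str, np.ndarray],
    threshold: float = 0.8,
) -> List[Tuple[str, str]]:
    """Propose auxiliary tasks by replacing `target_object` in the
    formula with contextually similar objects.

    `object_embeddings` maps object names to context-aware embedding
    vectors, e.g. obtained from a large language model."""
    anchor = object_embeddings[target_object]
    auxiliary_tasks = []
    for obj, emb in object_embeddings.items():
        if obj == target_object:
            continue
        if cosine_similarity(anchor, emb) >= threshold:
            # Substituting the object symbol yields a structurally
            # identical task with a different goal object.
            new_formula = task_formula.replace(target_object, obj)
            auxiliary_tasks.append((obj, new_formula))
    return auxiliary_tasks


# Example: an LTL-style specification "eventually reach the mug".
tasks = generate_auxiliary_tasks(
    task_formula="F (reached(mug))",
    target_object="mug",
    object_embeddings={
        "mug": np.array([0.9, 0.1, 0.2]),
        "cup": np.array([0.85, 0.15, 0.25]),
        "sofa": np.array([0.1, 0.9, 0.3]),
    },
)
print(tasks)  # e.g., [("cup", "F (reached(cup))")]
```

Each generated formula could then, as the abstract describes, be assigned its own policy and trained off-policy from the same experience stream gathered while solving the target task, so the auxiliary tasks cost no additional environment interaction.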
