

Poster in Workshop: LaReL: Language and Reinforcement Learning

LAD: Language Augmented Diffusion for Reinforcement Learning

Edwin Zhang · Yujie Lu · William Yang Wang · Amy Zhang

Keywords: [ RL ] [ Planning ] [ Diffusion ] [ Language ] [ Robotics ]


Abstract:

Learning skills from language potentially provides a powerful avenue for generalization in RL, but it remains a challenging task: agents must capture the complex interdependencies between language, actions, and states, a problem known as language grounding. In this paper, we propose leveraging Language Augmented Diffusion (LAD) models as a language-to-plan generator. We demonstrate that LAD achieves performance comparable to the state of the art on the CALVIN benchmark with a much simpler architecture, and we analyze the properties of language-conditioned diffusion in reinforcement learning.
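The abstract itself contains no implementation details. As a rough sketch of what "language-conditioned diffusion as a plan generator" could mean, the toy snippet below runs a standard DDPM reverse process in which the noise predictor also receives a language embedding, so the sampled trajectory is conditioned on the instruction. All names, shapes, and the linear stand-in "denoiser" are illustrative assumptions, not the authors' LAD architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a "plan" is a (horizon, state_dim) trajectory,
# and the language instruction is a fixed-size embedding vector.
HORIZON, STATE_DIM, LANG_DIM = 8, 4, 16

def denoiser(x_t, t, lang_emb, W):
    # Toy stand-in for a learned noise-prediction network eps_theta(x_t, t, lang):
    # here just a linear map of the concatenated inputs (illustration only).
    feats = np.concatenate([x_t.ravel(), [t], lang_emb])
    return (W @ feats).reshape(HORIZON, STATE_DIM)

def ddpm_sample(lang_emb, W, T=10):
    # Standard DDPM reverse process; the language embedding conditions every step.
    betas = np.linspace(1e-4, 0.2, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.normal(size=(HORIZON, STATE_DIM))      # start from pure noise
    for t in reversed(range(T)):
        eps_hat = denoiser(x, t / T, lang_emb, W)  # language-conditioned noise estimate
        # posterior mean of x_{t-1} given x_t and the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x  # a full trajectory ("plan") decoded from noise

# Random weights and a random "embedding" of an instruction such as "open the drawer".
W = rng.normal(scale=0.01, size=(HORIZON * STATE_DIM, HORIZON * STATE_DIM + 1 + LANG_DIM))
lang_emb = rng.normal(size=LANG_DIM)
plan = ddpm_sample(lang_emb, W)
print(plan.shape)  # (8, 4): horizon x state_dim
```

The point of the sketch is only the control flow: plan generation is denoising from Gaussian noise, and language enters as an extra input to the noise predictor at every reverse step.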
