Poster in Workshop: Agent Learning in Open-Endedness Workshop

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers

Taewook Nam · Juyong Lee · Jesse Zhang · Sung Ju Hwang · Joseph Lim · Karl Pertsch

Keywords: [ Unsupervised Reinforcement Learning ] [ Foundation Model ] [ Reinforcement Learning ] [ Open-ended Learning ] [ Large Language Model ]


Abstract:

We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human intervention. In our framework, the agent receives task instructions grounded in its training environment from a large language model. A vision-language model then guides the agent in learning these tasks by providing reward feedback. We demonstrate that our method learns semantically meaningful skills in the challenging open-ended MineDojo environment, where prior unsupervised skill discovery methods struggle. Additionally, we discuss the challenges we observed in using off-the-shelf foundation models as teachers and our efforts to address them.
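
The abstract describes a two-teacher loop: an LLM proposes environment-grounded task instructions, and a VLM scores the agent's behavior against each instruction to produce reward. Below is a minimal sketch of that loop under stated assumptions; all names (TaskProposer, VLMReward, the agent and env interfaces) are hypothetical stand-ins for illustration, not the authors' implementation.

```python
class TaskProposer:
    """LLM teacher: proposes a task instruction grounded in the environment.

    Hypothetical interface; in practice this would prompt an LLM with a
    description of the agent's current surroundings.
    """
    def propose(self, env_description: str) -> str:
        raise NotImplementedError


class VLMReward:
    """VLM teacher: scores how well an observation matches the instruction.

    Hypothetical interface; e.g., a similarity score between visual
    observations and the instruction text.
    """
    def score(self, observation, instruction: str) -> float:
        raise NotImplementedError


def lift_training_loop(env, agent, llm: TaskProposer, vlm: VLMReward,
                       num_episodes: int) -> None:
    """Sketch of the unsupervised loop described in the abstract."""
    for _ in range(num_episodes):
        # LLM grounds a semantically meaningful task in the environment.
        instruction = llm.propose(env.describe())
        obs = env.reset()
        done = False
        while not done:
            action = agent.act(obs, instruction)
            # Environment reward (if any) is ignored; the VLM supplies it.
            next_obs, _, done, info = env.step(action)
            reward = vlm.score(next_obs, instruction)
            agent.update(obs, action, reward, next_obs, instruction)
            obs = next_obs
```

The key design point, per the abstract, is that both supervision signals (which tasks to learn and how well they are being performed) come from off-the-shelf foundation models rather than human-designed rewards.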
