

Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL) Workshop

Codeplay: Autotelic Learning through Collaborative Self-Play in Programming Environments

Laetitia Teodorescu · Cédric Colas · Matthew Bowers · Thomas Carta · Pierre-Yves Oudeyer

Keywords: [ problem generation ] [ language models ] [ deep reinforcement learning ] [ program synthesis ] [ self-play ] [ autotelic learning ] [ intrinsic motivation ]


Abstract:

Autotelic learning is a training setup in which agents learn by setting their own goals and trying to achieve them. However, creatively generating freeform goals is challenging for autotelic agents. We present Codeplay, an algorithm that casts autotelic learning as a game between a Setter agent and a Solver agent: the Setter generates programming puzzles of appropriate difficulty and novelty for the Solver, and the Solver learns to solve them. Early experiments with the Setter, a code language model finetuned with deep reinforcement learning, demonstrate that the tradeoff between the difficulty of a puzzle and its novelty can be controlled effectively by tuning the Setter's reward.
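
To make the Setter–Solver game concrete, below is a minimal, hypothetical sketch of one Codeplay-style round. It is not the authors' implementation: the puzzle representation, the toy Setter and Solver, and the `alpha` coefficient that weights difficulty against novelty in the Setter's reward are all illustrative stand-ins for the finetuned code language model and its RL update described in the abstract.

```python
# Hypothetical sketch of a Setter/Solver self-play round (not the authors' code).
# The Setter's scalar reward trades off puzzle difficulty against novelty via alpha.

import random
from dataclasses import dataclass


@dataclass
class Puzzle:
    source: str        # puzzle program text (placeholder string here)
    embedding: float   # toy 1-D stand-in for a code embedding


def novelty(puzzle: Puzzle, archive: list) -> float:
    """Distance to the nearest previously generated puzzle (0 if archive is empty)."""
    if not archive:
        return 0.0
    return min(abs(puzzle.embedding - p.embedding) for p in archive)


def difficulty(solved: bool) -> float:
    """1.0 if the Solver failed, 0.0 if it succeeded (harder puzzles score higher)."""
    return 0.0 if solved else 1.0


def setter_reward(puzzle: Puzzle, solved: bool, archive: list, alpha: float) -> float:
    """Hypothetical reward: alpha weights difficulty against novelty."""
    return alpha * difficulty(solved) + (1.0 - alpha) * novelty(puzzle, archive)


def toy_setter() -> Puzzle:
    """Stand-in for sampling a puzzle from the finetuned code language model."""
    x = random.random()
    return Puzzle(source=f"def f(x): return x > {x:.2f}", embedding=x)


def toy_solver(puzzle: Puzzle) -> bool:
    """Stand-in for the Solver; succeeds more often on 'easier' puzzles."""
    return random.random() > puzzle.embedding


def codeplay_round(archive: list, alpha: float = 0.5) -> float:
    puzzle = toy_setter()
    solved = toy_solver(puzzle)
    reward = setter_reward(puzzle, solved, archive, alpha)
    archive.append(puzzle)
    return reward  # in the real setup, this reward would drive an RL update of the Setter


if __name__ == "__main__":
    archive = []
    rewards = [codeplay_round(archive, alpha=0.7) for _ in range(10)]
    print(f"mean Setter reward over 10 rounds: {sum(rewards) / len(rewards):.3f}")
```

Raising `alpha` in this sketch pushes the Setter toward puzzles the Solver fails on, while lowering it favors puzzles unlike those already in the archive, mirroring the difficulty/novelty tradeoff the abstract reports controlling through the Setter's reward.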
