

Poster in Workshop: Agent Learning in Open-Endedness Workshop

t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making

William Yue · Bo Liu · Peter Stone

Keywords: [ continual learning ] [ reasoning under uncertainty ] [ machine learning ] [ lifelong learning ] [ imitation learning ] [ decision making ]


Abstract:

Deep generative replay has emerged as a promising approach for continual learning in decision-making tasks. It addresses catastrophic forgetting by generating trajectories from previously encountered tasks to augment the current dataset. However, existing deep generative replay methods for continual learning rely on autoregressive models, which suffer from compounding errors in the generated trajectories. In this paper, we propose a simple, scalable, and non-autoregressive method for continual learning in decision-making tasks using a diffusion model that generates task samples conditioned on the trajectory timestep. We evaluate our method on Continual World benchmarks and find that it achieves state-of-the-art average success rate compared to other continual learning methods.
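The sketch below illustrates the core idea as described in the abstract: a DDPM-style denoiser conditioned on both the diffusion step and the trajectory timestep, so synthetic states for any point in a trajectory can be sampled directly rather than rolled out autoregressively, and then mixed into the next task's dataset as replay. It is a minimal toy implementation, not the authors' code; the names (StateDenoiser, obs_dim, traj_len), network sizes, and noise schedule are all assumptions for illustration.

```python
# Minimal sketch of trajectory-timestep-conditioned generative replay.
# Assumptions: toy MLP denoiser, linear beta schedule, synthetic data.
import torch
import torch.nn as nn

obs_dim, traj_len, n_diffusion_steps = 8, 10, 100

# Linear noise schedule for a toy DDPM.
betas = torch.linspace(1e-4, 0.02, n_diffusion_steps)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class StateDenoiser(nn.Module):
    """Predicts the noise added to a state, conditioned on the diffusion step
    and the trajectory timestep, so each trajectory timestep can be sampled
    independently (non-autoregressive generation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )

    def forward(self, x_noisy, diff_t, traj_t):
        cond = torch.stack([diff_t / n_diffusion_steps, traj_t / traj_len], dim=-1)
        return self.net(torch.cat([x_noisy, cond], dim=-1))

def train_step(model, opt, states, traj_t):
    """One DDPM training step on a batch of states and their trajectory timesteps."""
    diff_t = torch.randint(0, n_diffusion_steps, (states.shape[0],))
    noise = torch.randn_like(states)
    a_bar = alpha_bars[diff_t].unsqueeze(-1)
    x_noisy = a_bar.sqrt() * states + (1 - a_bar).sqrt() * noise
    loss = ((model(x_noisy, diff_t.float(), traj_t.float()) - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def generate(model, traj_t, n_samples=4):
    """Ancestral sampling: draw synthetic states for a given trajectory timestep,
    used to replay earlier tasks while training on a new one."""
    x = torch.randn(n_samples, obs_dim)
    tt = torch.full((n_samples,), float(traj_t))
    for step in reversed(range(n_diffusion_steps)):
        dt = torch.full((n_samples,), float(step))
        eps = model(x, dt, tt)
        a, a_bar = alphas[step], alpha_bars[step]
        x = (x - (1 - a) / (1 - a_bar).sqrt() * eps) / a.sqrt()
        if step > 0:
            x = x + betas[step].sqrt() * torch.randn_like(x)
    return x

model = StateDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Toy data standing in for observations from the current task's trajectories.
states = torch.randn(256, obs_dim)
traj_t = torch.randint(0, traj_len, (256,))
for _ in range(50):
    train_step(model, opt, states, traj_t)
# Replay: synthesize states for a chosen trajectory timestep; in a full pipeline
# these would be labeled by the previous policy and mixed into the next task's data.
replayed = generate(model, traj_t=3)
print(replayed.shape)  # torch.Size([4, 8])
```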
