Spotlight Poster

A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning

Arthur Juliani · Jordan Ash

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Continual learning with deep neural networks presents challenges distinct from both the fixed-dataset and the convex continual learning regimes. One such challenge is the phenomenon of plasticity loss, wherein a neural network trained in an online fashion displays a degraded ability to fit new tasks. This problem has been extensively studied in the supervised learning and off-policy reinforcement learning (RL) settings, where a number of remedies have been proposed. In contrast, plasticity loss has received comparatively little attention in the on-policy deep RL setting. Here we perform an extensive set of experiments examining plasticity loss and a variety of mitigation methods in on-policy deep RL. We demonstrate that plasticity loss also exists in this setting, and that a number of methods developed to resolve it in other settings fail, sometimes even resulting in performance that is worse than performing no intervention at all. In contrast, we find that a class of "regenerative" methods is able to consistently mitigate plasticity loss in a variety of contexts. In particular, we find that a continual version of shrink+perturb initialization, originally proposed to remedy the closely related "warm-start problem" studied in supervised learning, is able to consistently resolve plasticity loss in both gridworld tasks and more challenging environments drawn from the ProcGen and ALE RL benchmarks.
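
To make the highlighted mitigation concrete, below is a minimal PyTorch-style sketch of shrink-and-perturb applied continually during training, i.e., periodically shrinking the current weights and adding noise from a freshly initialized network. The function name, the `make_fresh_model` helper, and the `shrink`/`perturb` coefficients are illustrative assumptions, not the paper's exact procedure or hyperparameters.

```python
import torch


def shrink_and_perturb(model, make_fresh_model, shrink=0.8, perturb=0.01):
    """Shrink existing weights and add noise from a freshly initialized network.

    Sketch of shrink-and-perturb (Ash & Adams, 2020); calling this periodically
    during an on-policy training loop corresponds to the "continual" variant
    discussed in the abstract. Coefficients here are illustrative only.
    """
    fresh = make_fresh_model()  # same architecture, new random initialization
    with torch.no_grad():
        for p, p_fresh in zip(model.parameters(), fresh.parameters()):
            # theta <- shrink * theta + perturb * theta_fresh
            p.mul_(shrink).add_(perturb * p_fresh)
    return model


# Hypothetical usage inside a training loop: apply every K policy updates.
# for update in range(num_updates):
#     ...collect rollouts, compute loss, optimizer step...
#     if update % K == 0:
#         shrink_and_perturb(policy, lambda: PolicyNet(), shrink=0.8, perturb=0.01)
```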
