Understanding and Preventing Capacity Loss in Reinforcement Learning
Clare Lyle · Mark Rowland · Will Dabney
Event URL: https://openreview.net/forum?id=5G7fT_tJTt

The reinforcement learning (RL) problem is rife with sources of non-stationarity that can destabilize or inhibit learning progress. We identify a key mechanism by which this occurs in agents using neural networks as function approximators: capacity loss, whereby networks trained to predict a sequence of target values lose their ability to quickly fit new functions over time. We demonstrate that capacity loss occurs in a broad range of RL agents and environments, and is particularly damaging to learning progress in sparse-reward tasks. We then present a simple regularizer, Initial Feature Regularization (InFeR), that mitigates this phenomenon by regressing a subspace of features towards its value at initialization, improving performance over a state-of-the-art model-free algorithm in the Atari 2600 suite. Finally, we study how this regularization affects different notions of capacity and evaluate other mechanisms by which it may improve performance.
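The abstract describes InFeR as regressing a subspace of features towards its value at initialization. A minimal sketch of one plausible form of such a penalty (all names, dimensions, and the coefficient `beta` are illustrative assumptions, not taken from the paper): a few auxiliary linear heads are frozen at initialization, and the current features are penalized for drifting away from the head outputs computed on the initial features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's experimental settings).
feat_dim, n_heads, batch = 32, 4, 8

# Auxiliary linear heads on the penultimate features, frozen at init.
W_heads = rng.normal(size=(n_heads, feat_dim)) / np.sqrt(feat_dim)

def aux_outputs(features):
    """One scalar output per frozen auxiliary head, per sample."""
    return features @ W_heads.T

# Head outputs on the *initial* features serve as regression targets.
features_init = rng.normal(size=(batch, feat_dim))
targets = aux_outputs(features_init)

def infer_penalty(features, beta=0.1):
    """Sketch of an InFeR-style penalty: beta * MSE between the heads'
    outputs on the current features and their values at initialization."""
    return beta * np.mean((aux_outputs(features) - targets) ** 2)

# Zero at initialization; grows as the learned features drift away.
print(infer_penalty(features_init))                       # 0.0
features_drifted = features_init + 0.5 * rng.normal(size=features_init.shape)
print(infer_penalty(features_drifted) > 0.0)              # True
```

In training, this term would be added to the usual RL loss, so gradient descent trades off fitting new targets against keeping the regularized feature subspace close to its initial values.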

Author Information

Clare Lyle (University of Oxford)
Mark Rowland (DeepMind)
Will Dabney (DeepMind)