

Poster

Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset

Alexandre Galashov · Michalis Titsias · Razvan Pascanu · Clare Lyle · András György · Yee Whye Teh · Maneesh Sahani

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Neural networks are traditionally trained under the assumption that data come from a stationary distribution. However, settings that violate this assumption are increasingly common; examples include supervised learning under distribution shift, reinforcement learning, continual learning, and non-stationary contextual bandits. In this work we introduce a novel learning approach that automatically models and adapts to non-stationarity via an Ornstein-Uhlenbeck process with an adaptive drift parameter. The adaptive drift tends to draw the parameters towards the initialisation distribution, so the approach can be understood as a form of soft parameter reset. We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
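To make the "soft reset" intuition concrete, the sketch below simulates an Ornstein-Uhlenbeck-style parameter update that drifts parameters back towards a sample from their initialisation distribution. This is a minimal illustration only: the drift coefficient, noise scale, and update form here are hypothetical choices, and the paper's key ingredient, learning the drift adaptively, is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sample from the initialisation distribution, and parameters
# that have since drifted far away from it during training.
theta_init = rng.normal(0.0, 1.0, size=8)
theta = theta_init + 5.0

drift = 0.1         # hypothetical fixed drift (the paper adapts this online)
noise_scale = 0.01  # hypothetical diffusion scale

for _ in range(200):
    # Discrete OU-style step: pull theta towards theta_init, plus noise.
    # A large drift acts like a (soft) reset to the initialisation;
    # a small drift leaves the parameters mostly unchanged.
    theta = theta + drift * (theta_init - theta) \
            + noise_scale * rng.normal(size=theta.shape)

# After many steps the deterministic gap has decayed by (1 - drift)**200,
# so theta fluctuates in a small neighbourhood of theta_init.
dist = float(np.linalg.norm(theta - theta_init))
print(dist)
```

Under non-stationarity, increasing the drift when the data distribution shifts lets the model forget stale parameter values gracefully, rather than via a hard re-initialisation.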
