Poster
Meta-Gradient Reinforcement Learning with an Objective Discovered Online
Zhongwen Xu · Hado van Hasselt · Matteo Hessel · Junhyuk Oh · Satinder Singh · David Silver

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1385

Deep reinforcement learning includes a broad family of algorithms that parameterise an internal representation, such as a value function or policy, by a deep neural network. Each algorithm optimises its parameters with respect to an objective, such as Q-learning or policy gradient, that defines its semantics. In this work, we propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network, solely from interactive experience with its environment. Over time, this allows the agent to learn how to learn increasingly effectively. Furthermore, because the objective is discovered online, it can adapt to changes over time. We demonstrate that the algorithm discovers how to address several important issues in RL, such as bootstrapping, non-stationarity, and off-policy learning. On the Arcade Learning Environment (Atari), the meta-gradient algorithm adapts over time to learn with greater efficiency, eventually outperforming the median score of a strong actor-critic baseline.
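
To make the meta-gradient idea in the abstract concrete, the sketch below (written in JAX, not taken from the paper) shows the two nested updates it describes: an inner step that updates agent parameters theta under an objective parameterised by meta-parameters eta, and an outer step that updates eta by differentiating an ordinary policy-gradient loss through that inner step. The function names (learned_objective, inner_update, outer_objective), the tiny linear policy, and the random toy batch are illustrative assumptions, not the authors' implementation.

    import jax
    import jax.numpy as jnp

    def learned_objective(eta, theta, obs, actions, rewards):
        # Inner objective L_eta: a small network (parameterised by eta) maps
        # per-step features to a scalar loss for the agent's policy.
        logits = obs @ theta                       # toy linear policy over actions
        logp = jax.nn.log_softmax(logits)
        chosen = jnp.take_along_axis(logp, actions[:, None], axis=1).squeeze(-1)
        feats = jnp.stack([chosen, rewards], axis=-1)
        hidden = jnp.tanh(feats @ eta["w1"] + eta["b1"])
        per_step = (hidden @ eta["w2"] + eta["b2"]).squeeze(-1)
        return per_step.mean()

    def inner_update(eta, theta, batch, lr=0.1):
        # One gradient step on the agent's parameters under the learned objective.
        g = jax.grad(learned_objective, argnums=1)(eta, theta, *batch)
        return theta - lr * g

    def outer_objective(eta, theta, batch):
        # Meta-objective: a plain REINFORCE loss evaluated *after* the inner
        # update, so its gradient w.r.t. eta flows through that update.
        theta_new = inner_update(eta, theta, batch)
        obs, actions, rewards = batch
        logp = jax.nn.log_softmax(obs @ theta_new)
        chosen = jnp.take_along_axis(logp, actions[:, None], axis=1).squeeze(-1)
        return -(chosen * rewards).mean()

    key = jax.random.PRNGKey(0)
    theta = jnp.zeros((4, 2))                      # 4 features, 2 actions
    eta = {"w1": 0.1 * jax.random.normal(key, (2, 8)), "b1": jnp.zeros(8),
           "w2": 0.1 * jax.random.normal(key, (8, 1)), "b2": jnp.zeros(1)}

    # Toy batch of experience (random observations, actions, rewards).
    obs = jax.random.normal(key, (16, 4))
    actions = jax.random.randint(key, (16,), 0, 2)
    rewards = jax.random.normal(key, (16,))
    batch = (obs, actions, rewards)

    # Online loop: meta-gradient step on eta, then agent step on theta.
    meta_lr = 0.01
    for _ in range(3):
        meta_grads = jax.grad(outer_objective)(eta, theta, batch)
        eta = jax.tree_util.tree_map(lambda p, g: p - meta_lr * g, eta, meta_grads)
        theta = inner_update(eta, theta, batch)

In the paper the learned objective is applied to full trajectories and updated online as new experience arrives; this sketch collapses that to a single fixed batch purely to show how the outer gradient is taken through the inner update.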

Author Information

Zhongwen Xu (DeepMind)
Hado van Hasselt (DeepMind)
Matteo Hessel (Google DeepMind)
Junhyuk Oh (DeepMind)
Satinder Singh (DeepMind)
David Silver (DeepMind)
