Poster
Discovery of Useful Questions as Auxiliary Tasks
Vivek Veeriah · Matteo Hessel · Zhongwen Xu · Janarthanan Rajendran · Richard L Lewis · Junhyuk Oh · Hado van Hasselt · David Silver · Satinder Singh

Wed Dec 11 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #216

Arguably, intelligent agents ought to be able to discover their own questions so that, in learning answers for them, they learn unanticipated useful knowledge and skills; this departs from the focus in much of machine learning on agents learning answers to externally defined questions. We present a novel method for a reinforcement learning (RL) agent to discover questions formulated as general value functions (GVFs), a fairly rich form of knowledge representation. Specifically, our method uses non-myopic meta-gradients to learn GVF-questions such that learning answers to them, as an auxiliary task, induces useful representations for the main task faced by the RL agent. We demonstrate that auxiliary tasks based on the discovered GVFs are sufficient, on their own, to build representations that support main task learning, and that they do so better than popular hand-designed auxiliary tasks from the literature. Furthermore, we show, in the context of Atari 2600 video games, how such auxiliary tasks, meta-learned alongside the main task, can improve the data efficiency of an actor-critic agent.
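To make the meta-gradient idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): a "question" network with meta-parameters eta defines cumulants for auxiliary GVF predictions, the shared representation theta is updated on the auxiliary (answer) loss, and eta is then updated by differentiating the main-task loss through several of those updates (the non-myopic part). All function names, network shapes, and losses here are simplifying assumptions made for the example.

```python
# Hypothetical sketch of meta-gradient question discovery (assumed names and shapes).
import jax
import jax.numpy as jnp

def features(theta, obs):
    # Shared representation: one linear layer with ReLU (a stand-in for a deep net).
    return jax.nn.relu(obs @ theta["w"] + theta["b"])

def main_loss(theta, obs, targets):
    # Stand-in for the agent's main-task (e.g. actor-critic) loss: a value regression.
    v = features(theta, obs) @ theta["v"]
    return jnp.mean((v - targets) ** 2)

def cumulants(eta, obs):
    # "Question" network: meta-parameters eta map observations to GVF cumulants.
    return obs @ eta["q"]

def aux_loss(theta, eta, obs):
    # "Answer" heads: predict the discovered cumulants from the shared features.
    preds = features(theta, obs) @ theta["gvf"]
    return jnp.mean((preds - cumulants(eta, obs)) ** 2)

def inner_update(theta, eta, obs, lr=1e-2):
    # One SGD step on the auxiliary loss; the new theta depends on eta via the cumulants.
    grads = jax.grad(aux_loss)(theta, eta, obs)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, theta, grads)

def meta_loss(eta, theta, obs, targets, unroll=3):
    # Non-myopic meta-objective: main-task loss after several auxiliary updates.
    for _ in range(unroll):
        theta = inner_update(theta, eta, obs)
    return main_loss(theta, obs, targets)

meta_grad_fn = jax.grad(meta_loss)  # gradient of the meta-objective w.r.t. eta

# Tiny usage example with random data (shapes are arbitrary choices).
key = jax.random.PRNGKey(0)
k1, k2, k3, k4, k5, k6 = jax.random.split(key, 6)
obs_dim, feat_dim, n_gvfs = 8, 16, 4
theta = {
    "w": 0.1 * jax.random.normal(k1, (obs_dim, feat_dim)),
    "b": jnp.zeros(feat_dim),
    "v": 0.1 * jax.random.normal(k2, (feat_dim,)),
    "gvf": 0.1 * jax.random.normal(k3, (feat_dim, n_gvfs)),
}
eta = {"q": 0.1 * jax.random.normal(k4, (obs_dim, n_gvfs))}
obs = jax.random.normal(k5, (32, obs_dim))
targets = jax.random.normal(k6, (32,))
meta_grad = meta_grad_fn(eta, theta, obs, targets)  # gradient used to update the questions
```

In the paper the inner updates are the agent's own actor-critic updates and the meta-gradient is unrolled over many steps of experience; this toy version only shows how the dependence of the updated representation on the question parameters yields a well-defined meta-gradient.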

Author Information

Vivek Veeriah (University of Michigan)
Matteo Hessel (Google DeepMind)
Zhongwen Xu (DeepMind)
Janarthanan Rajendran (University of Michigan)
Richard L Lewis (University of Michigan)
Junhyuk Oh (DeepMind)
Hado van Hasselt (DeepMind)
David Silver (DeepMind)
Satinder Singh (University of Michigan)
