Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on adapting the behavior to gather useful data for those off-policy predictions. In this work, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behavior policy that learns to take actions to improve those auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both the prediction learners and the behavior learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and prove that separating the learning of successor features and rewards provides convergence-rate improvements. We conduct an in-depth study of the resulting multi-prediction learning system.
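The abstract's core idea, that factoring a value prediction into successor features (which depend only on the policy and dynamics) and reward weights (which depend only on the reward signal) lets a learner re-track quickly when rewards drift, can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical tabular example on a 3-state deterministic cycle, and all names, constants, and the toy environment are assumptions.

```python
import numpy as np

# Illustrative sketch (assumed toy setup, not the paper's method):
# tabular successor features (SF) under a fixed policy on a 3-state cycle.
n_states, gamma, alpha = 3, 0.9, 0.1
next_state = lambda s: (s + 1) % n_states   # fixed policy + dynamics: a cycle
phi = np.eye(n_states)                       # one-hot state features

Psi = np.zeros((n_states, n_states))  # Psi[s] ~ E[sum_t gamma^t phi(s_t) | s_0 = s]
w = np.zeros(n_states)                # reward weights: r(s) ~ w . phi(s)
rewards = np.array([1.0, 0.0, 0.0])   # reward observed at each state

s = 0
for _ in range(5000):
    s_next = next_state(s)
    # TD update for the successor features (depends on dynamics only)
    Psi[s] += alpha * (phi[s] + gamma * Psi[s_next] - Psi[s])
    # LMS update for the reward weights (depends on rewards only)
    w += alpha * (rewards[s] - w @ phi[s]) * phi[s]
    s = s_next

v = Psi @ w  # value prediction factored as SFs times reward weights

# Non-stationary rewards: only the fast LMS update on w must re-track;
# Psi is unchanged because the policy and dynamics did not change.
rewards = np.array([0.0, 0.0, 1.0])
for _ in range(200):
    s_next = next_state(s)
    w += alpha * (rewards[s] - w @ phi[s]) * phi[s]
    s = s_next
v_new = Psi @ w  # updated prediction without relearning Psi
```

The second loop is the point of the separation: after the reward vector changes, the prediction is repaired by relearning only the reward weights, which is the kind of tracking advantage the convergence-rate result in the paper formalizes.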
Author Information
Matthew McLeod (University of Alberta)
Chunlok Lo (University of Alberta)
Matthew Schlegel (University of Alberta)
An AI and coffee enthusiast with research experience in RL and ML. Currently pursuing a PhD at the University of Alberta! Excited about off-policy policy evaluation, general value functions, understanding the behavior of artificial neural networks, and cognitive science (specifically cognitive neuroscience).
Andrew Jacobsen (University of Alberta)
I am made completely out of human body parts
Raksha Kumaraswamy (University of Alberta)
Martha White
Adam White (University of Alberta; DeepMind)
More from the Same Authors
- 2023 Poster: General Munchausen Reinforcement Learning with Tsallis Kullback-Leibler Divergence
  Lingwei Zhu · Zheng Chen · Matthew Schlegel · Martha White
- 2021 Poster: Structural Credit Assignment in Neural Networks using Reinforcement Learning
  Dhawal Gupta · Gabor Mihucz · Matthew Schlegel · James Kostas · Philip Thomas · Martha White
- 2020 Tutorial: (Track3) Policy Optimization in Reinforcement Learning Q&A
  Sham M Kakade · Martha White · Nicolas Le Roux
- 2020 Tutorial: (Track3) Policy Optimization in Reinforcement Learning
  Sham M Kakade · Martha White · Nicolas Le Roux
- 2019 Poster: Importance Resampling for Off-policy Prediction
  Matthew Schlegel · Wesley Chung · Daniel Graves · Jian Qian · Martha White
- 2018 Poster: Context-dependent upper-confidence bounds for directed exploration
  Raksha Kumaraswamy · Matthew Schlegel · Adam White · Martha White
- 2017 Poster: Multi-view Matrix Factorization for Linear Dynamical System Estimation
  Mahdi Karami · Martha White · Dale Schuurmans · Csaba Szepesvari