Spotlight
VIREL: A Variational Inference Framework for Reinforcement Learning
Mattie Fellows · Anuj Mahajan · Tim G. J. Rudner · Shimon Whiteson

Wed Dec 11 10:25 AM -- 10:30 AM (PST) @ West Exhibition Hall A

Applying probabilistic models to reinforcement learning (RL) enables the use of powerful optimisation tools such as variational inference. However, existing inference frameworks and their algorithms pose significant challenges for learning optimal policies, e.g., the lack of mode-capturing behaviour in pseudo-likelihood methods, difficulty learning deterministic policies in approaches based on maximum entropy RL, and a lack of analysis when function approximators are used. We propose VIREL, a theoretically grounded probabilistic inference framework for RL that utilises a parametrised action-value function to summarise the future dynamics of the underlying MDP, generalising existing approaches. VIREL also benefits from a mode-seeking form of KL divergence, the ability to learn deterministic optimal policies naturally from inference, and the ability to optimise value functions and policies in separate, iterative steps. By applying variational expectation-maximisation to VIREL, we show that the actor-critic algorithm can be reduced to expectation-maximisation, with policy improvement equivalent to an E-step and policy evaluation to an M-step. We then derive a family of actor-critic methods from VIREL, including a scheme for adaptive exploration. Finally, we demonstrate that actor-critic algorithms from this family outperform state-of-the-art methods based on soft value functions in several domains.
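As a rough illustration (not the authors' code) of the E-step/M-step decomposition described in the abstract, the sketch below runs an EM-style actor-critic loop on a small random tabular MDP: the E-step sets the policy to the Boltzmann distribution over the current Q-values, which exactly minimises the mode-seeking (reverse) KL divergence to that Boltzmann distribution, and the M-step performs one synchronous policy-evaluation backup. The residual-driven temperature is a toy stand-in for the paper's adaptive exploration scheme; all names and the schedule itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random tabular MDP: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)
temperature = 1.0  # toy stand-in for VIREL's adaptive temperature

for _ in range(200):
    # E-step (policy improvement): the Boltzmann policy over Q is the
    # exact minimiser of the mode-seeking (reverse) KL divergence to
    # the Boltzmann distribution induced by the current Q-values.
    logits = Q / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)

    # M-step (policy evaluation): one synchronous Bellman backup of Q
    # under the current policy pi.
    V = (pi * Q).sum(axis=1)        # state values under pi, shape (S,)
    target = R + gamma * P @ V      # expected Bellman target, shape (S, A)
    residual = np.abs(target - Q).max()
    Q = target

    # As the Bellman residual shrinks, so does the temperature, and pi
    # approaches a deterministic (greedy) policy -- mimicking the
    # adaptive-exploration behaviour the abstract attributes to VIREL.
    temperature = max(residual, 1e-3)

print("Greedy actions per state:", Q.argmax(axis=1))
```

Because the temperature shrinks with the Bellman residual, the policy is stochastic (exploratory) while the critic is inaccurate and becomes increasingly deterministic as evaluation converges, which is the qualitative behaviour the abstract describes.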

Author Information

Mattie Fellows (University of Oxford)
Anuj Mahajan (University of Oxford)

Anuj is a PhD student in machine learning at the University of Oxford. His research focuses on using deep learning, probabilistic inference, and optimisation methods for single- and multi-agent reinforcement learning. Anuj completed his undergraduate degree in Computer Science at the Indian Institute of Technology, Delhi. His PhD is funded by the Google DeepMind Scholarship and the Drapers Scholarship.

Tim G. J. Rudner (University of Oxford)

Tim G. J. Rudner is a Computer Science PhD student at the University of Oxford supervised by Yarin Gal and Yee Whye Teh. His research interests span Bayesian deep learning, reinforcement learning, and variational inference. He obtained a master’s degree in statistics from the University of Oxford and an undergraduate degree in mathematics and economics from Yale University. Tim is also a Rhodes Scholar and a Fellow of the German National Academic Foundation.

Shimon Whiteson (University of Oxford)
