Tutorial
Variational Inference: Foundations and Modern Methods
David Blei · Shakir Mohamed · Rajesh Ranganath

Sun Dec 04 11:30 PM -- 01:30 AM (PST) @ Area 1 + 2

One of the core problems of modern statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in probabilistic modeling, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this tutorial we review and discuss variational inference (VI), a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling. VI was brought into machine learning in the 1990s; recent advances and easier implementation have renewed interest in, and application of, this class of methods. This tutorial aims to provide both an introduction to VI with a modern view of the field, and an overview of the role that probabilistic inference plays in many of the central areas of machine learning.
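To make the "approximation through optimization" view concrete, here is a standard formulation (a sketch added for illustration, not quoted from the tutorial abstract): VI posits a family of distributions q(z; λ) over the unknowns z and maximizes the evidence lower bound (ELBO) with respect to the variational parameters λ,

    \mathcal{L}(\lambda)
      = \mathbb{E}_{q(z;\lambda)}\!\left[\log p(x, z) - \log q(z;\lambda)\right]
      = \log p(x) - \mathrm{KL}\!\left(q(z;\lambda) \,\|\, p(z \mid x)\right).

Because log p(x) does not depend on λ, maximizing the ELBO is equivalent to minimizing the KL divergence from q(z; λ) to the exact posterior p(z | x), which is how an intractable integration problem becomes an optimization problem.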

The tutorial has three parts. First, we provide a broad review of variational inference from several perspectives. This part serves as an introduction to (or review of) its central concepts. Second, we develop and connect some of the pivotal tools for VI that have been developed in the last few years, tools like Monte Carlo gradient estimation, black box variational inference, stochastic approximation, and variational auto-encoders. These methods have led to a resurgence of research and applications of VI. Finally, we discuss some of the unsolved problems in VI and point to promising research directions.
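As a minimal sketch of how these pieces fit together (a toy example assumed here, not material from the tutorial), black box variational inference can be run with the score-function (REINFORCE) Monte Carlo gradient estimator of the ELBO and stochastic gradient ascent; the model, variational family, and step sizes below are hypothetical choices:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy conjugate model: p(z) = N(0, 1), p(x | z) = N(z, 1), one observation.
    x_obs = 2.0

    def log_joint(z):
        # log p(x_obs, z), up to additive constants
        return -0.5 * z**2 - 0.5 * (x_obs - z)**2

    def log_q(z, mu, log_sigma):
        # log density of the Gaussian variational family q(z; mu, sigma)
        sigma = np.exp(log_sigma)
        return -0.5 * ((z - mu) / sigma)**2 - log_sigma

    def elbo_grad(mu, log_sigma, n_samples=200):
        # Score-function estimator: E_q[ grad log q(z) * (log p(x, z) - log q(z)) ]
        sigma = np.exp(log_sigma)
        z = rng.normal(mu, sigma, size=n_samples)
        score_mu = (z - mu) / sigma**2                    # d/d(mu) log q
        score_ls = ((z - mu) / sigma)**2 - 1.0            # d/d(log sigma) log q
        f = log_joint(z) - log_q(z, mu, log_sigma)        # instantaneous ELBO
        return np.mean(score_mu * f), np.mean(score_ls * f)

    mu, log_sigma = 0.0, 0.0
    for step in range(2000):
        g_mu, g_ls = elbo_grad(mu, log_sigma)
        mu += 0.01 * g_mu                                 # stochastic gradient ascent
        log_sigma += 0.01 * g_ls

    print(mu, np.exp(log_sigma))

For this toy model the exact posterior is N(1.0, 0.5), so the fitted mean and standard deviation should land near 1.0 and 0.71. The estimator only requires evaluating the log joint and sampling from q, which is why the same recipe carries over unchanged to models whose posteriors have no closed form.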

Learning objectives:

  • Gain a well-grounded understanding of modern advances in variational inference.
  • Understand how to implement basic versions for a wide class of models.
  • Understand connections and different names used in other related research areas.
  • Understand important problems in variational inference research.

Target audience:

  • Machine learning researchers across all levels of experience, from first-year graduate students to more experienced researchers
  • Targeted at those who want to understand recent advances in variational inference
  • Basic understanding of probability is sufficient

Author Information

David Blei (Columbia University)

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.

Shakir Mohamed (DeepMind)

Shakir Mohamed is a senior staff scientist at DeepMind in London. Shakir's main interests lie at the intersection of approximate Bayesian inference, deep learning and reinforcement learning, and the role that machine learning systems at this intersection have in the development of more intelligent and general-purpose learning systems. Before moving to London, Shakir held a Junior Research Fellowship from the Canadian Institute for Advanced Research (CIFAR), based in Vancouver at the University of British Columbia with Nando de Freitas. Shakir completed his PhD with Zoubin Ghahramani at the University of Cambridge, where he was a Commonwealth Scholar to the United Kingdom. Shakir is from South Africa and completed his previous degrees in Electrical and Information Engineering at the University of the Witwatersrand, Johannesburg.

Rajesh Ranganath (Princeton University)

Rajesh Ranganath is a PhD candidate in computer science at Princeton University. His research interests include approximate inference, model checking, Bayesian nonparametrics, and machine learning for healthcare. Rajesh has made several advances in variational methods, especially in popularising black-box variational inference methods, which automate inference by making variational inference easier to use while providing more scalable and accurate posterior approximations. Rajesh works in the SLAP group with David Blei. Before starting his PhD, Rajesh worked as a software engineer for AMA Capital Management. He obtained his BS and MS from Stanford University with Andrew Ng and Dan Jurafsky. Rajesh has won several awards and fellowships, including the NDSEG graduate fellowship and the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University.
