Approximating a probability density in a tractable manner is a central task in Bayesian statistics. Variational Inference (VI) is a popular technique that achieves tractability by choosing a relatively simple variational approximation. Borrowing ideas from the classic boosting framework, recent approaches attempt to boost VI by replacing the selection of a single density with an iteratively constructed mixture of densities. In order to guarantee convergence, previous works impose stringent assumptions that require significant effort from practitioners. Specifically, they require a custom implementation of the greedy step (called the LMO, for Linear Minimization Oracle) for every probabilistic model, with respect to an unnatural variational family of truncated distributions. Our work fixes these issues with novel theoretical and algorithmic insights. On the theoretical side, we show that boosting VI satisfies a relaxed smoothness assumption which is sufficient for the convergence of the functional Frank-Wolfe (FW) algorithm. Furthermore, we rephrase the LMO problem and propose to maximize the Residual ELBO (RELBO), which replaces the standard ELBO optimization in VI. These theoretical enhancements allow for a black-box implementation of the boosting subroutine. Finally, we present a stopping criterion drawn from the duality gap in the classic FW analysis, together with extensive experiments illustrating the usefulness of our theoretical and algorithmic contributions.
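To make the pipeline described in the abstract concrete, below is a minimal, self-contained sketch (Python with NumPy/SciPy) of the kind of boosting loop it describes: a functional Frank-Wolfe iteration that, at each round, greedily selects a new Gaussian component by maximizing a Monte Carlo estimate of a RELBO-style objective and mixes it into the current approximation with the standard 2/(t+2) step size. Everything here is an illustrative assumption rather than the authors' implementation: the helper names (log_target, relbo, mixture_logpdf), the one-dimensional Gaussian family, and the coarse grid search that stands in for the black-box LMO solver; the duality-gap stopping criterion is only indicated as a comment.

```python
# Minimal sketch (illustrative assumptions): boosting VI as functional Frank-Wolfe
# over 1-D Gaussian components. A coarse grid search stands in for the black-box
# RELBO solver; names such as `relbo` and `log_target` are hypothetical.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def log_target(x):
    # Unnormalized target log-density log p(x): a bimodal Gaussian mixture.
    return logsumexp([norm.logpdf(x, -2.0, 0.7), norm.logpdf(x, 2.5, 1.0)], axis=0)

def mixture_logpdf(x, comps, weights):
    # log q_t(x) for the current mixture of Gaussian components.
    logs = np.stack([norm.logpdf(x, m, s) for m, s in comps])
    return logsumexp(logs + np.log(weights)[:, None], axis=0)

def relbo(mean, std, comps, weights, lam=1.0, n=2000, seed=0):
    # Monte Carlo estimate of a RELBO-style objective for a candidate component s:
    # E_s[log p(x)] - lam * E_s[log s(x)] - E_s[log q_t(x)].
    x = np.random.default_rng(seed).normal(mean, std, size=n)
    return np.mean(log_target(x)
                   - lam * norm.logpdf(x, mean, std)
                   - mixture_logpdf(x, comps, weights))

comps, weights = [(0.0, 3.0)], np.array([1.0])       # broad initial approximation
for t in range(1, 11):
    # "LMO" step: pick the candidate component maximizing the RELBO estimate.
    candidates = [(m, s) for m in np.linspace(-4.0, 4.0, 17) for s in (0.5, 1.0, 2.0)]
    scores = [relbo(m, s, comps, weights) for m, s in candidates]
    new_comp = candidates[int(np.argmax(scores))]
    # (The paper's duality-gap stopping criterion would be checked here.)
    gamma = 2.0 / (t + 2.0)                           # standard Frank-Wolfe step size
    comps.append(new_comp)
    weights = np.append((1.0 - gamma) * weights, gamma)

print("mixture components (mean, std) and weights:")
for (m, s), w in zip(comps, weights):
    print(f"  N({m:+.2f}, {s:.2f})  weight={w:.3f}")
```

In the actual method, the RELBO maximization would be handed to an off-the-shelf black-box VI solver rather than a grid search, and the mixture weight could also be chosen by line search; the grid is used here only to keep the example short, deterministic, and dependency-light.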
Author Information
Francesco Locatello (MPI Tübingen - ETH Zürich)
Gideon Dresdner (ETH Zürich)
Rajiv Khanna (University of Texas at Austin)
Isabel Valera (Max Planck Institute for Intelligent Systems)
Gunnar Ratsch (ETHZ)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Spotlight: Boosting Black Box Variational Inference
  Thu. Dec 6th, 02:50 -- 02:55 PM, Room 220 E
More from the Same Authors
- 2020 Contributed Talk 3: Algorithmic Recourse: from Counterfactual Explanations to Interventions
  Amir-Hossein Karimi · Bernhard Schölkopf · Isabel Valera
- 2020 Workshop: I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning
  Jessica Forde · Francisco Ruiz · Melanie Fernandez Pradier · Aaron Schein · Finale Doshi-Velez · Isabel Valera · David Blei · Hanna Wallach
- 2020 Poster: Object-Centric Learning with Slot Attention
  Francesco Locatello · Dirk Weissenborn · Thomas Unterthiner · Aravindh Mahendran · Georg Heigold · Jakob Uszkoreit · Alexey Dosovitskiy · Thomas Kipf
- 2020 Spotlight: Object-Centric Learning with Slot Attention
  Francesco Locatello · Dirk Weissenborn · Thomas Unterthiner · Aravindh Mahendran · Georg Heigold · Jakob Uszkoreit · Alexey Dosovitskiy · Thomas Kipf
- 2020 Poster: Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
  Amir-Hossein Karimi · Julius von Kügelgen · Bernhard Schölkopf · Isabel Valera
- 2020 Spotlight: Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
  Amir-Hossein Karimi · Julius von Kügelgen · Bernhard Schölkopf · Isabel Valera
- 2019 Workshop: Learning with Temporal Point Processes
  Manuel Rodriguez · Le Song · Isabel Valera · Yan Liu · Abir De · Hongyuan Zha
- 2019 Workshop: Workshop on Human-Centric Machine Learning
  Plamen P Angelov · Nuria Oliver · Adrian Weller · Manuel Rodriguez · Isabel Valera · Silvia Chiappa · Hoda Heidari · Niki Kilbertus
- 2019 Poster: Are Disentangled Representations Helpful for Abstract Visual Reasoning?
  Sjoerd van Steenkiste · Francesco Locatello · Jürgen Schmidhuber · Olivier Bachem
- 2019 Poster: On the Fairness of Disentangled Representations
  Francesco Locatello · Gabriele Abbati · Thomas Rainforth · Stefan Bauer · Bernhard Schölkopf · Olivier Bachem
- 2019 Poster: On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
  Muhammad Waleed Gondal · Manuel Wuethrich · Djordje Miladinovic · Francesco Locatello · Martin Breidt · Valentin Volchkov · Joel Akpo · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Learning Sparse Distributions using Iterative Hard Thresholding
  Jacky Zhang · Rajiv Khanna · Anastasios Kyrillidis · Sanmi Koyejo
- 2019 Poster: Stochastic Frank-Wolfe for Composite Convex Minimization
  Francesco Locatello · Alp Yurtsever · Olivier Fercoq · Volkan Cevher
- 2018 Poster: Enhancing the Accuracy and Fairness of Human Decision Making
  Isabel Valera · Adish Singla · Manuel Gomez Rodriguez
- 2017 Poster Session
  Shunsuke Horii · Heejin Jeong · Tobias Schwedes · Qing He · Ben Calderhead · Ertunc Erdil · Jaan Altosaar · Patrick Muchmore · Rajiv Khanna · Ian Gemp · Pengfei Zhang · Yuan Zhou · Chris Cremer · Maria DeYoreo · Alexander Terenin · Brendan McVeigh · Rachit Singh · Yaodong Yang · Erik Bodin · Trefor Evans · Henry Chai · Shandian Zhe · Jeffrey Ling · Vincent ADAM · Lars Maaløe · Andrew Miller · Ari Pakman · Josip Djolonga · Hong Ge
- 2017 Poster Spotlights
  Francesco Locatello · Ari Pakman · Da Tang · Thomas Rainforth · Zalan Borsos · Marko Järvenpää · Eric Nalisnick · Gabriele Abbati · XIAOYU LU · Jonathan Huggins · Rachit Singh · Rui Luo
- 2017 Poster: Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees
  Francesco Locatello · Michael Tschannen · Gunnar Ratsch · Martin Jaggi
- 2016 Oral: Examples are not enough, learn to criticize! Criticism for Interpretability
  Been Kim · Sanmi Koyejo · Rajiv Khanna
- 2016 Poster: Examples are not enough, learn to criticize! Criticism for Interpretability
  Been Kim · Sanmi Koyejo · Rajiv Khanna
- 2014 Poster: On Prior Distributions and Approximate Inference for Structured Variables
  Sanmi Koyejo · Rajiv Khanna · Joydeep Ghosh · Russell Poldrack