The Bayesian posterior minimizes the "inferential risk," which itself bounds the "predictive risk." This bound is tight when the likelihood and prior are well-specified. However, since misspecification induces a gap between the two risks, the Bayesian posterior predictive distribution may have poor generalization performance. This work develops a multi-sample loss (PAC^m) which can close the gap by spanning a trade-off between the two risks. The loss is computationally favorable and offers PAC generalization guarantees. Empirical study demonstrates improvement in the quality of the predictive distribution.
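To make the multi-sample idea concrete, the sketch below contrasts a single-sample (Gibbs-style) negative log-likelihood with a multi-sample loss that averages the likelihood over m posterior samples before taking the log, which is the basic mechanism behind a PAC^m-style objective. The function names (`pacm_loss`, `gibbs_loss`) and the toy data are illustrative assumptions, not the paper's actual implementation; the only claim encoded here is the Jensen-inequality relation that the multi-sample loss is never larger than the single-sample average.

```python
import numpy as np

def pacm_loss(log_liks):
    """Multi-sample predictive loss (illustrative sketch).

    log_liks: array of shape (m, n) holding log p(y_i | x_i, theta_j)
    for m posterior samples theta_j and n data points. Averages the
    *likelihood* over samples before taking the log, approximating
    the posterior predictive.
    """
    m = log_liks.shape[0]
    # log-mean-exp over the sample axis, computed stably
    log_pred = np.logaddexp.reduce(log_liks, axis=0) - np.log(m)
    return -log_pred.mean()

def gibbs_loss(log_liks):
    """Single-sample (Gibbs) loss: mean NLL across samples and data."""
    return -log_liks.mean()

# Toy demonstration with synthetic per-sample log-likelihoods.
rng = np.random.default_rng(0)
log_liks = rng.normal(loc=-1.0, scale=0.5, size=(8, 100))

# By Jensen's inequality, log of a mean >= mean of logs, so the
# multi-sample loss lower-bounds the Gibbs loss.
print(pacm_loss(log_liks) <= gibbs_loss(log_liks))  # True
```

As m grows, the multi-sample loss moves from the Gibbs risk toward the predictive risk, which is the trade-off the abstract refers to.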
Author Information
Joshua Dillon (Google Research)
Warren Morningstar (Google)
I am an AI Resident at Google, studying how to model uncertainty in neural networks. Before Google, I was an astrophysicist at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University, working on statistical modeling and machine learning applied to astronomical observations.
Alexander Alemi (Google)
More from the Same Authors
- 2021: What Do We Mean by Generalization in Federated Learning? »
  Honglin Yuan · Warren Morningstar · Lin Ning
- 2022: Trajectory ensembling for fine tuning - performance gains without modifying training »
  Louise Anderson-Conway · Vighnesh Birodkar · Saurabh Singh · Hossein Mobahi · Alexander Alemi
- 2021: PAC^m-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime »
  Alexander Alemi
- 2021 Poster: Does Knowledge Distillation Really Work? »
  Samuel Stanton · Pavel Izmailov · Polina Kirichenko · Alexander Alemi · Andrew Wilson
- 2020 Workshop: Deep Learning through Information Geometry »
  Pratik Chaudhari · Alexander Alemi · Varun Jog · Dhagash Mehta · Frank Nielsen · Stefano Soatto · Greg Ver Steeg
- 2019: Poster Session »
  Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · SHIYU LIU · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis
- 2019: Invited Talk: Alexander A Alemi »
  Alexander Alemi
- 2019 Poster: Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift »
  Jasper Snoek · Yaniv Ovadia · Emily Fertig · Balaji Lakshminarayanan · Sebastian Nowozin · D. Sculley · Joshua Dillon · Jie Ren · Zachary Nado
- 2019 Poster: Likelihood Ratios for Out-of-Distribution Detection »
  Jie Ren · Peter Liu · Emily Fertig · Jasper Snoek · Ryan Poplin · Mark Depristo · Joshua Dillon · Balaji Lakshminarayanan
- 2018 Poster: Watch Your Step: Learning Node Embeddings via Graph Attention »
  Sami Abu-El-Haija · Bryan Perozzi · Rami Al-Rfou · Alexander Alemi
- 2018 Poster: GILBO: One Metric to Measure Them All »
  Alexander Alemi · Ian Fischer
- 2018 Spotlight: GILBO: One Metric to Measure Them All »
  Alexander Alemi · Ian Fischer
- 2016 Poster: DeepMath - Deep Sequence Models for Premise Selection »
  Geoffrey Irving · Christian Szegedy · Alexander Alemi · Niklas Een · Francois Chollet · Josef Urban