
Bayesian Deep Learning
Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling

Fri Dec 07 05:00 AM -- 03:30 PM (PST) @ Room 220 D
Event URL: http://bayesiandeeplearning.org

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty nor take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements [1-11]. In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal [12], David MacKay [13], and Dayan et al. [14]. These gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, earlier tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.

Building on the workshop’s success over the past couple of years, this workshop will again study the advantages and disadvantages of the ideas above, and will be a platform to host the recent flourishing of ideas using Bayesian approaches in deep learning and using deep learning tools in Bayesian modelling. The program includes a mix of invited talks, contributed talks, and contributed posters. The main theme this year will be applications of Bayesian deep learning in the real world, highlighting the requirements of practitioners from the research community. Future directions for the field will be debated in a panel discussion.

The BDL workshop has been the second largest workshop at NIPS over the past couple of years; last year’s edition saw an almost 100% increase in submissions (75 in total) and attracted sponsorship from Google, Microsoft Ventures, Uber, and Qualcomm in the form of student travel awards.

Topics of interest include:

Probabilistic deep models for classification and regression (such as extensions and applications of Bayesian neural networks),
Generative deep models (such as variational autoencoders),
Incorporating explicit prior knowledge in deep learning (such as posterior regularization with logic rules),
Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
Scalable MCMC inference in Bayesian deep models,
Deep recognition models for variational inference (amortized inference),
Model uncertainty in deep learning,
Bayesian deep reinforcement learning,
Deep learning with small data,
Deep learning in Bayesian modelling,
Probabilistic semi-supervised learning techniques,
Active learning and Bayesian optimization for experimental design,
Applying non-parametric methods and one-shot learning in Bayesian deep learning,
Implicit inference,
Kernel methods in Bayesian deep learning.
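As a concrete illustration of one topic above (model uncertainty in deep learning), the sketch below shows Monte Carlo dropout for a toy regression network: dropout is kept on at test time and predictions are averaged over several stochastic forward passes, with the spread of the samples serving as an uncertainty estimate. This is a minimal NumPy sketch with random, hypothetical weights, not a reference implementation from the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained weights for a 1-hidden-layer regression network.
W1, b1 = rng.normal(size=(1, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def stochastic_forward(x, p=0.5):
    """One forward pass with dropout left ON at test time (MC dropout)."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=100):
    """Predictive mean and std over T stochastic forward passes."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # each (1, 1); std > 0 reflects model uncertainty
```

The same pattern applies to any dropout-trained network: enable dropout at inference, sample T predictions, and report their mean and spread.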

Call for papers:
A submission should take the form of an extended abstract (3 pages long) in PDF format using the NIPS style. Author names do not need to be anonymized, and references (as well as appendices) may extend as far as needed beyond the 3-page upper limit. If research has previously appeared in a journal, workshop, or conference (including the NIPS 2017 conference), the workshop submission should extend that previous work.
Submissions will be accepted as contributed talks or poster presentations.

Related previous workshops:
Bayesian Deep Learning (NIPS 2017)
Principled Approaches to Deep Learning (ICML 2017)
Bayesian Deep Learning (NIPS 2016)
Data-Efficient Machine Learning (ICML 2016)
Deep Learning Workshop (ICML 2015, 2016)
Deep Learning Symposium (NIPS 2015 symposium)
Advances in Approximate Bayesian Inference (NIPS 2015)
Black box learning and inference (NIPS 2015)
Deep Reinforcement Learning (NIPS 2015)
Deep Learning and Representation Learning (NIPS 2014)
Advances in Variational Inference (NIPS 2014)

Fri 5:00 a.m. - 5:05 a.m.

Introductory comments by the organisers.

Yarin Gal
Fri 5:05 a.m. - 5:25 a.m.
TBC 1 (Invited Talk)
Frank Wood
Fri 5:25 a.m. - 5:45 a.m.
TBC 2 (Invited Talk)
Dmitry Vetrov
Fri 5:45 a.m. - 6:00 a.m.
TBC 3 (Contributed Talk)
Fri 6:00 a.m. - 6:20 a.m.
TBC 4 (Invited Talk)
Debora Marks
Fri 6:20 a.m. - 6:40 a.m.
TBC 5 (Invited Talk)
Harri Valpola
Fri 6:40 a.m. - 6:55 a.m.
Poster Spotlights
Fri 6:55 a.m. - 7:55 a.m.
Poster Session 1 (Poster Session)
Stefan Gadatsch, Danil Kuzin, Navneet Kumar, Patrick Dallaire, Tom Ryder, Remus-Petru Pop, Nathan Hunt, Adam Kortylewski, Sophie Burkhardt, Mahmoud Elnaggar, Dieterich Lawson, Yifeng Li, J. Jon Ryu, Juhan Bae, Micha Livne, Tim Pearce, Mariia Vladimirova, Jason E. Ramapuram, Jiaming Zeng, Xinyu Hu, Eric Jiawei He, Danielle Maddix, Arunesh Mittal, Albert Shaw, Tuan Anh Le, Alexander Sagel, Lisha Chen, Victor Gallego, Mahdi Karami, Zihao Zhang, Tal Kachman, Noah Weber, Matt Benatan, Kumar K Sricharan, Vincent Cartillier, Ivan Ovinnikov, Buu Phan, Mahmoud Hossam, Liu Ziyin, Valery Kharitonov, Eugene Golikov, Qiang Zhang, JaeMyung Kim, Sebastian Farquhar, Jishnu Mukhoti, Xu Hu, Gregory Gundersen, lavanya Tekumalla, Paris Perdikaris, Ershad Banijamali, Siddhartha Jain, Ge Liu, Martin Gottwald, Katy Blumer, Sukmin Yun, Ranganath Krishnan, Roman Novak, Yilun Du, Yu Gong, Beliz Gokkaya, Jessica Ai, Daniel Duckworth, Johannes von Oswald, Christian Henning, LP Morency, Ali Ghodsi, Mahesh Subedar, Jean-Pascal Pfister, Rémi Lebret, Chao Ma, Aleksander Wieczorek, Laurence Perreault Levasseur
Fri 7:55 a.m. - 8:15 a.m.
TBC 6 (Invited Talk)
Christian Leibig
Fri 8:15 a.m. - 8:30 a.m.
TBC 7 (Contributed Talk)
Fri 8:30 a.m. - 8:50 a.m.
TBC 8 (Invited Talk)
Balaji Lakshminarayanan
Fri 8:50 a.m. - 10:20 a.m.
Lunch Break
Fri 10:20 a.m. - 10:40 a.m.
TBC 9 (Invited Talk)
Sergey Levine
Fri 10:40 a.m. - 10:55 a.m.
TBC 10 (Contributed Talk)
Fri 10:55 a.m. - 11:10 a.m.
TBC 11 (Invited Talk)
Yashar Hezaveh
Fri 11:10 a.m. - 11:30 a.m.
TBC 12 (Invited Talk)
Tim Genewein
Fri 11:30 a.m. - 12:30 p.m.
Poster Session 2 (Poster Session)
Fri 12:30 p.m. - 12:50 p.m.
TBC 13 (Invited Talk)
David Sontag
Fri 12:50 p.m. - 1:05 p.m.
TBC 14 (Contributed Talk)
Fri 1:05 p.m. - 1:25 p.m.
TBC 15 (Invited Talk)
Yarin Gal
Fri 1:30 p.m. - 2:30 p.m.
Panel Session
Fri 2:30 p.m. - 4:00 p.m.
Poster Session 3 (Poster Session)

Author Information

Yarin Gal (University of Oxford)
José Miguel Hernández-Lobato (University of Cambridge)
Christos Louizos (University of Amsterdam)
Andrew Wilson (Cornell University)
Zoubin Ghahramani (Uber and University of Cambridge)

Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads the Machine Learning Group. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research interests include statistical machine learning, Bayesian nonparametrics, scalable inference, probabilistic programming, and building an automatic statistician. He has held a number of leadership roles as programme and general chair of the leading international conferences in machine learning including: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). In 2015 he was elected a Fellow of the Royal Society.

Kevin Murphy (Google)
Max Welling (University of Amsterdam / Qualcomm AI Research)
