Black box learning and inference
Josh Tenenbaum · Jan-Willem van de Meent · Tejas Kulkarni · S. M. Ali Eslami · Brooks Paige · Frank Wood · Zoubin Ghahramani

Sat Dec 12 05:30 AM -- 03:30 PM (PST) @ 513 ef
Event URL: http://www.blackboxworkshop.org

Probabilistic models have traditionally co-evolved with tailored algorithms for efficient learning and inference. One of the exciting developments of recent years has been the resurgence of black box methods, which make relatively few assumptions about the model structure, allowing application to broader model families.

In probabilistic programming systems, black box methods have greatly improved the capabilities of inference backends. Similarly, the design of connectionist models has been simplified by the development of black box frameworks for training arbitrary architectures. These innovations open up opportunities to design new classes of models that smoothly negotiate the transition from low-level features of the data to high-level structured representations that are interpretable and generalize well across examples.

This workshop brings together developers of black box inference technologies, probabilistic programming systems, and connectionist computing frameworks. The goal is to formulate a shared understanding of how black box methods can enable advances in the design of intelligent learning systems. Topics of discussion will include:

* Black box techniques for gradient ascent, variational inference, and Markov chain and sequential Monte Carlo.
* Implementation of black box techniques in probabilistic programming systems and in computing frameworks for connectionist model families.
* Models that integrate top-down and bottom-up representations to perform amortized inference: variational autoencoders, deep latent Gaussian models, restricted Boltzmann machines, and neural-network-based proposals in MCMC.
* Applications to vision, speech, reinforcement learning, motor control, and language learning.
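To make the first topic concrete, the following is a minimal sketch of black box variational inference using the score-function (REINFORCE) gradient estimator: the optimizer only ever evaluates the model's log joint density pointwise, with no model-specific derivations. The toy conjugate-Gaussian model, data, and step-size schedule are illustrative assumptions, not code from any of the systems discussed at the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption): prior z ~ Normal(0, 1),
# likelihood x_i ~ Normal(z, 1). The "black box" only needs this function.
data = rng.normal(2.0, 1.0, size=20)

def log_joint(z):
    """log p(z) + sum_i log p(x_i | z), up to additive constants."""
    return -0.5 * z ** 2 - 0.5 * np.sum((data - z) ** 2)

# Variational family: q(z) = Normal(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0

for step in range(2000):
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=32)              # samples from q
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)
    elbo_term = np.array([log_joint(zi) for zi in z]) - log_q
    elbo_term -= elbo_term.mean()                   # baseline control variate
    # Score function (gradient of log q) w.r.t. the variational parameters.
    score_mu = (z - mu) / sigma ** 2
    score_ls = (z - mu) ** 2 / sigma ** 2 - 1.0
    # Noisy Monte Carlo gradients of the ELBO; ascend with a decaying step.
    lr = 0.01 / np.sqrt(step + 1.0)
    mu += lr * np.mean(score_mu * elbo_term)
    log_sigma += lr * np.mean(score_ls * elbo_term)
```

Because the estimator touches the model only through `log_joint`, the same loop applies unchanged to any model whose joint density can be evaluated, which is what makes the method "black box"; practical systems add variance reduction beyond the simple baseline used here.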

Sat 11:30 a.m. - 11:50 a.m.
Variational Auto-Encoders and Extensions (Talk)
Durk Kingma
Sat 11:50 a.m. - 12:15 p.m.
Automatic Differentiation Variational Inference in Stan (Talk)
Alp Kucukelbir
Sat 12:15 p.m. - 12:35 p.m.
Variational Gaussian Process (Talk)
Dustin Tran
Sat 12:35 p.m. - 12:55 p.m.
Black Box Policy Search with Probabilistic Programs (Talk)
Jan-Willem van de Meent
Sat 1:30 p.m. - 2:15 p.m.
Importance Weighted Autoencoders (Talk)
Russ Salakhutdinov

Author Information

Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).

Jan-Willem van de Meent (University of Oxford)
Tejas Kulkarni (MIT)
Ali Eslami (Google DeepMind)
Brooks Paige (University of Oxford)
Frank Wood (University of Oxford)

Dr. Wood is an associate professor in the Department of Engineering Science at the University of Oxford. Before that he was an assistant professor of Statistics at Columbia University and a research scientist at the Columbia Center for Computational Learning Systems. He was formerly a postdoctoral fellow at the Gatsby Computational Neuroscience Unit of University College London. He holds a PhD from Brown University (’07) and a BS from Cornell University (’96), both in computer science. Dr. Wood is the original architect of both the Anglican and Probabilistic-C probabilistic programming systems. He conducts AI-driven research at the boundary of probabilistic programming, Bayesian modeling, and Monte Carlo methods. Dr. Wood holds 6 patents, has authored over 50 papers, received the AISTATS best paper award in 2009, and has been awarded faculty research awards from Xerox, Google, and Amazon. Prior to his academic career he was a successful entrepreneur, having run and sold the content-based image retrieval company ToFish! to AOL/Time Warner and served as CEO of Interfolio.

Zoubin Ghahramani (University of Cambridge)

Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads the Machine Learning Group. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research interests include statistical machine learning, Bayesian nonparametrics, scalable inference, probabilistic programming, and building an automatic statistician. He has held a number of leadership roles as programme and general chair of the leading international conferences in machine learning including: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). In 2015 he was elected a Fellow of the Royal Society.
