Machine learning researchers often express complex models as programs, relying on program transformations to add functionality. New languages and transformations (e.g., TorchScript and TensorFlow AutoGraph) are becoming core capabilities of ML libraries. However, existing transformations, such as automatic differentiation (AD), inference in probabilistic programming languages (PPLs), and optimizing compilers, are often built in isolation and limited in scope. This workshop aims to view program transformations in ML in a unified light, to make these capabilities more accessible, and to build entirely new ones.
Program transformations are an area of active study. AD transforms a program performing numerical computation into one that computes the gradient of that computation. In a PPL, a program describing a sampling procedure can be modified to perform inference on model parameters given observations. Other examples include vectorizing a program written for a single data point, and learned transformations in which ML models take programs as inputs or produce them as outputs.
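To make the AD case concrete, here is a minimal sketch (in plain Python, not any particular library's API) of forward-mode AD implemented via operator overloading on dual numbers; the Dual class and the derivative helper are illustrative names invented for this example.

from dataclasses import dataclass


@dataclass
class Dual:
    value: float   # primal value
    deriv: float   # derivative with respect to the chosen input

    def _lift(self, other):
        # Promote plain numbers to constants (derivative 0).
        return other if isinstance(other, Dual) else Dual(float(other), 0.0)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def derivative(f, x):
    # Running f on a dual number seeded with deriv=1 yields df/dx at x.
    return f(Dual(float(x), 1.0)).deriv


def f(x):                      # an ordinary numerical program
    return 3 * x * x + 2 * x


print(f(2.0))                  # 16.0
print(derivative(f, 2.0))      # d/dx (3x^2 + 2x) at x=2 -> 14.0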
This workshop will bring together researchers in the fields of AD, programming languages, compilers, and ML, with the goal of understanding the commonalities between disparate approaches and views, and of sharing ways to make these techniques broadly available. Doing so would enable ML practitioners to iterate faster on novel models and architectures (e.g., those naturally expressed through high-level constructs like recursion).
Topics:
—Abstractions and syntax (beyond meta-programming and operator overloading; see the sketch after this list for the meta-programming baseline) to naturally express a program (expression, or procedure) as an object to be manipulated
—Techniques from AD and PPL that the ML community could adopt to enable research on new models
—How to overcome challenges posed by ML's specific hardware (GPUs, specialized chips) and software (Python) stacks, and by the particular demands practitioners place on their tools
—Greater collaboration between ML and programming languages communities
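To ground the first topic, here is a toy sketch of the meta-programming baseline it alludes to: a Python function's source is parsed into an AST, transformed as a data structure (here, by folding constant subexpressions), and compiled back into a function. The ConstantFolder and transform names are invented for this illustration.

import ast
import inspect
import operator
import textwrap

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}


class ConstantFolder(ast.NodeTransformer):
    # Replace binary operations whose operands are literal constants
    # with the computed value.

    def visit_BinOp(self, node):
        self.generic_visit(node)          # fold children first
        op = _OPS.get(type(node.op))
        if (op is not None
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            folded = ast.Constant(value=op(node.left.value, node.right.value))
            return ast.copy_location(folded, node)
        return node


def transform(fn):
    # Parse fn's source, rewrite its AST, and compile it back to a function.
    tree = ast.parse(textwrap.dedent(inspect.getsource(fn)))
    tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
    namespace = {}
    exec(compile(tree, filename="<ast>", mode="exec"), fn.__globals__, namespace)
    return namespace[fn.__name__]


def scale(x):
    return (2 * 3.14159) * x              # constant subexpression folds to 6.28318


fast_scale = transform(scale)
print(fast_scale(2.0))                     # 12.56636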
Sat 8:30 a.m. - 8:40 a.m. | Opening statements (Introduction)
Sat 8:40 a.m. - 9:30 a.m. | Jan-Willem van de Meent - Compositional Methods for Learning and Inference in Deep Probabilistic Programs (Talk)
Deep learning and probabilistic programming have much in common: both rely on software abstractions to enable iterative model development. In this talk, we discuss how to integrate techniques from both domains in problems where we would like to use priors to induce structured representations. To do so, we employ reweighted wake-sleep methods, which combine importance sampling (operationalized in probabilistic programming) with variational methods for learning proposals. To enable a more iterative design of these methods, we introduce compositional constructs, which we refer to as combinators, that define both model structure and evaluation strategies corresponding to different importance sampling schemes. Together, these constructs define a path towards a more compositional design of variational methods that are correct by construction.
Sat 9:30 a.m. - 9:50 a.m. | Applications of a disintegration transformation (Talk) | Praveen Narayanan
Sat 9:50 a.m. - 10:30 a.m. | Coffee break
Sat 10:30 a.m. - 11:20 a.m. | Christine Tasson - Semantics of Functional Probabilistic Programs (Talk)
Probabilities are used extensively in computer science. Algorithms use probabilistic choices to improve efficiency, or even to tackle problems that are unsolvable with deterministic computation. Recently, (functional) probabilistic programming has been introduced for applications in machine learning and artificial intelligence. Probabilistic programs are used to describe statistical models and to develop probabilistic data analyses. In probabilistic programming languages, inference algorithms are often delegated to compilers, including optimizations. These program transformations are error-prone, yet they must not change the probabilistic model, hence the need for formal methods to avoid bugs. Developing formal semantics for probabilistic computing is challenging but crucial for systematizing the analysis and certification of probabilistic programs. In this talk, I will first introduce functional probabilistic programming and the related problems. Then, I will present recent work on the semantics of probabilistic computing, based on approximating programs according to their use of resources.
Sat 11:20 a.m. - 11:40 a.m. | The Differentiable Curry (Talk) | Dimitrios Vytiniotis
Sat 11:40 a.m. - 12:00 p.m. | Functional Tensors for Probabilistic Programming (Talk) | Fritz Obermeyer
Sat 12:00 p.m. - 2:00 p.m. | Lunch break & Poster session (Poster Session)
Breandan Considine · Michael Innes · Du Phan · Dougal Maclaurin · Robin Manhaeve · Alexey Radul · Shashi Gowda · Ekansh Sharma · Eli Sennesh · Maxim Kochurov · Gordon Plotkin · Thomas Wiecki · Navjot Kukreja · Chung-chieh Shan · Matthew Johnson · Dan Belov · Neeraj Pradhan · Wannes Meert · Angelika Kimmig · Luc De Raedt · Brian Patton · Matthew Hoffman · Rif A. Saurous · Daniel Roy · Eli Bingham · Martin Jankowiak · Colin Carroll · Junpeng Lao · Liam Paull · Martin Abadi · Angel Rojas Jimenez · JP Chen
Sat 2:00 p.m. - 2:50 p.m. | Optimized execution of PyTorch programs with TorchScript (Talk) | Zachary DeVito
Sat 2:50 p.m. - 3:40 p.m. | Skye Wanderman-Milne - JAX: accelerated machine-learning research via composable function transformations in Python (Talk)
JAX is a system for high-performance machine learning research. It offers the familiarity of Python+NumPy together with hardware acceleration, and it enables the definition and composition of user-wielded function transformations useful for machine learning programs. These transformations include automatic differentiation, automatic batching, end-to-end compilation (via XLA), parallelizing over multiple accelerators, and more. Composing these transformations is the key to JAX's power and simplicity.
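A minimal sketch of how these transformations compose, using the public jax.grad, jax.vmap, and jax.jit APIs; the loss function and data shapes below are made up for illustration.

import jax
import jax.numpy as jnp


def loss(w, x, y):
    # Squared error of a linear model on a single example.
    return (jnp.dot(w, x) - y) ** 2


grad_loss = jax.grad(loss)                                     # automatic differentiation
per_example_grads = jax.vmap(grad_loss, in_axes=(None, 0, 0))  # automatic batching
fast_grads = jax.jit(per_example_grads)                        # end-to-end compilation via XLA

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)                            # a batch of 4 examples
ys = jnp.arange(4.0)

print(fast_grads(w, xs, ys).shape)                             # (4, 3): one gradient per example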
Sat 3:40 p.m. - 4:20 p.m. | Coffee break
Sat 4:20 p.m. - 4:40 p.m. | Generalized Abs-Linear Learning (Talk) | Andreas Griewank
Sat 4:40 p.m. - 5:00 p.m. | Towards Polyhedral Automatic Differentiation (Talk) | Jan Hueckelheim
Sat 5:00 p.m. - 5:20 p.m. | Taylor-Mode Automatic Differentiation for Higher-Order Derivatives in JAX (Talk) | Jesse Bettencourt
Sat 5:20 p.m. - 6:00 p.m. | Panel and general discussion (Panel Discussion)
Author Information
Pascal Lamblin (Google)
Atilim Gunes Baydin (University of Oxford)
Alexander Wiltschko (Google Brain)
Bart van Merriënboer (Google)
Emily Fertig (Google Research)
Barak Pearlmutter (Maynooth University)
David Duvenaud (University of Toronto)
David Duvenaud is an assistant professor in computer science at the University of Toronto. His research focuses on continuous-time models, latent-variable models, and deep learning. He completed his postdoc at Harvard University and his Ph.D. at the University of Cambridge. David also co-founded Invenia, an energy forecasting and trading company.
Laurent Hascoet (INRIA)