The Symbiosis of Deep Learning and Differential Equations II
Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Celotti · Martin Magill · Ermal Rrapaj · Qiyao Wei · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud

Fri Dec 09 04:00 AM -- 10:55 AM (PST) @ Virtual
Event URL: https://dlde-2022.github.io/

In recent years, there has been a rapid increase of machine learning applications in the computational sciences, with some of the most impressive results at the interface of deep learning (DL) and differential equations (DEs). DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. Conversely, architectures based on DEs, such as neural differential equations and continuous-time diffusion models, have been successfully employed as deep learning models in their own right. Moreover, theoretical tools from DE analysis have been used to glean insights into the expressivity and training dynamics of mainstream deep learning algorithms.
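To make the neural-differential-equation idea mentioned above concrete, here is a minimal sketch (not part of the workshop materials): a small MLP parameterizes the vector field dh/dt = f(h, t), and a fixed-step Euler solver integrates the hidden state forward. All names and parameter shapes are illustrative assumptions; in practice one would use a trainable model and an adaptive solver.

```python
import numpy as np

def vector_field(h, t, W1, b1, W2, b2):
    """A small MLP parameterizing dh/dt = f(h, t); t is appended as an input."""
    inp = np.concatenate([h, [t]])
    hidden = np.tanh(W1 @ inp + b1)
    return W2 @ hidden + b2

def euler_integrate(h0, t0, t1, steps, params):
    """Fixed-step Euler solve of the neural ODE from t0 to t1."""
    h = h0.copy()
    dt = (t1 - t0) / steps
    for k in range(steps):
        t = t0 + k * dt
        h = h + dt * vector_field(h, t, *params)
    return h

# Illustrative, untrained weights for a 2-d state and 16 hidden units.
rng = np.random.default_rng(0)
dim, hidden = 2, 16
params = (rng.normal(0.0, 0.5, (hidden, dim + 1)), np.zeros(hidden),
          rng.normal(0.0, 0.5, (dim, hidden)), np.zeros(dim))

h0 = np.array([1.0, -1.0])
h1 = euler_integrate(h0, 0.0, 1.0, steps=100, params=params)
print(h1.shape)  # (2,)
```

The "depth" of the network is replaced by the integration interval, which is what lets DE theory (stability, adjoint sensitivity, solver error) carry over to the learned model.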

This workshop will aim to bring together researchers with backgrounds in computational science and deep learning to encourage intellectual exchanges, cultivate relationships and accelerate research in this area. The scope of the workshop spans topics at the intersection of DL and DEs, including theory of DL and DEs, neural differential equations, solving DEs with neural networks, and more.

Author Information

Michael Poli (Stanford University)
Winnie Xu (University of Toronto / Stanford University)
Estefany Kelly Buchanan (Columbia University)
Maryam Hosseini (Université de Sherbrooke)
Luca Celotti (Université de Sherbrooke)
Martin Magill (Borealis AI)

I am a PhD student in modelling and computational science under the supervision of Dr. Hendrick de Haan in the cNAB.LAB for computational nanobiophysics. Recently, I’ve been interested in using deep neural networks to solve the partial differential equations that describe electric fields and molecular transport through nanofluidic devices. I’ve also been using these mathematical problems as a controlled setting in which to study deep neural networks themselves.

Ermal Rrapaj (University of California Berkeley)
Qiyao Wei (University of Toronto)

I am a University of Toronto undergraduate inspired by the advances and accomplishments of Artificial Intelligence. My short-term research interest is to make AI more about theory and justification, rather than network design and hyperparameter tuning. My long-term goal is to crack the code of human intelligence, or to achieve human-level intelligence in our machines.

Stefano Massaroli (The University of Tokyo)
Patrick Kidger (Google X)
Archis Joglekar (University of Michigan - Ann Arbor)
Animesh Garg (University of Toronto, Nvidia, Vector Institute)

I am a CIFAR AI Chair Assistant Professor of Computer Science at the University of Toronto, a Faculty Member at the Vector Institute, and a Senior Researcher at Nvidia. My current research focuses on machine learning for perception and control in robotics.

David Duvenaud (University of Toronto)

David Duvenaud is an assistant professor in computer science at the University of Toronto. His research focuses on continuous-time models, latent-variable models, and deep learning. He completed his postdoc at Harvard University and his Ph.D. at the University of Cambridge. David also co-founded Invenia, an energy forecasting and trading company.
