Workshop: Interpretable Inductive Biases and Physically Structured Learning

Michael Lutter, Alexander Terenin, Shirley Ho, Lei Wang

Sat, Dec 12th, 2020 @ 14:30 – 22:30 GMT
Abstract: Over the last decade, deep networks have propelled machine learning to tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in the data: human-imperceptible changes to an input can lead to absurd predictions. In many application areas, including physics, robotics, and the social and life sciences, this motivates the need for robustness and interpretability, so that deep networks can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge into the model or learning process as an inductive bias, which regularizes the model, helps avoid overfitting, and makes the model easier to understand for scientists who are not machine learning experts. In recent years, researchers from many fields have proposed such combinations of domain knowledge and machine learning and have applied them successfully across a range of applications.
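As one concrete illustration of the kind of physically structured inductive bias discussed at the workshop, below is a minimal sketch (not taken from any of the talks) of a Hamiltonian-style network: a scalar energy function is learned, and the dynamics are obtained from its gradients, so the conservation structure is built into the architecture rather than added as a penalty term. PyTorch is assumed; the class name HamiltonianNet and the layer sizes are illustrative choices, not part of the workshop program.

```python
# Minimal sketch of a physically structured inductive bias: a Hamiltonian-style
# network. The model learns a scalar energy H(q, p) and derives the dynamics
# from Hamilton's equations, so the symplectic structure is architectural.
import torch
import torch.nn as nn


class HamiltonianNet(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        # Scalar energy function H(q, p) parameterized by a small MLP.
        self.energy = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.dim = dim

    def forward(self, q, p):
        # q, p are assumed to be plain data tensors (no gradient history).
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        qp = torch.cat([q, p], dim=-1).requires_grad_(True)
        H = self.energy(qp).sum()
        grads = torch.autograd.grad(H, qp, create_graph=True)[0]
        dH_dq, dH_dp = grads[..., :self.dim], grads[..., self.dim:]
        return dH_dp, -dH_dq


# Usage: regress the predicted (dq/dt, dp/dt) against derivative estimates from
# observed trajectories with an ordinary loss; the conservation-law prior comes
# from the model structure itself, not from an extra regularization term.
model = HamiltonianNet(dim=1)
q = torch.randn(8, 1)
p = torch.randn(8, 1)
dq_dt, dp_dt = model(q, p)
```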


Schedule

14:30 – 14:35 GMT
Introduction
14:35 – 14:50 GMT
Thomas Pierrot - Learning Compositional Neural Programs for Continuous Control
Thomas Pierrot
14:50 – 15:10 GMT
Jessica Hamrick - Structured Computation and Representation in Deep Reinforcement Learning
Jessica Hamrick
15:10 – 15:25 GMT
Manu Kalia - Deep learning of normal form autoencoders for universal, parameter-dependent dynamics
Manu Kalia
15:25 – 15:50 GMT
Rose Yu - Physics-Guided AI for Learning Spatiotemporal Dynamics
Rose Yu
15:50 – 16:05 GMT
Ferran Alet - Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Ferran Alet
16:05 – 17:00 GMT
Poster Session 1
17:00 – 17:25 GMT
Frank Noé - PauliNet: Deep Neural Network Solution of the Electronic Schrödinger Equation
Frank Noé
17:25 – 17:40 GMT
Kimberly Stachenfeld - Graph Networks with Spectral Message Passing
Kim Stachenfeld
17:40 – 18:10 GMT
Franziska Meier - Inductive Biases for Models and Learning-to-Learn
Franziska Meier
18:10 – 18:25 GMT
Rui Wang - Shapley Explanation Networks
Rui Wang
18:25 – 18:55 GMT
Jeannette Bohg - On the Role of Hierarchies for Learning Manipulation Skills
Christin Jeannette Bohg
19:00 – 20:00 GMT
Panel Discussion
20:00 – 21:00 GMT
4 - Physics-informed Generative Adversarial Networks for Sequence Generation with Limited Data
Chacha Chen
20:00 – 21:00 GMT
25 - Complex Skill Acquisition through Simple Skill Imitation Learning
Pranay Pasula
20:00 – 21:00 GMT
12 - IV-Posterior: Inverse Value Estimation for Interpretable Policy Certificates
Tatiana López-Guevara
20:00 – 21:00 GMT
9 - Thermodynamic Consistent Neural Networks for Learning Material Interfacial Mechanics
Jiaxin Zhang
20:00 – 21:00 GMT
20 - SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency
Sameer Dharur
20:00 – 21:00 GMT
5 - On the Structure of Cyclic Linear Disentangled Representations
Matthew Painter
20:00 – 21:00 GMT
Poster Session 2
20:00 – 21:00 GMT
26 - Is the Surrogate Model Interpretable?
Sangwon Kim
20:00 – 21:00 GMT
6 - Interpretable Models for Granger Causality Using Self-explaining Neural Networks
Ričards Marcinkevičs
20:00 – 21:00 GMT
21 - Solving Physics Puzzles by Reasoning about Paths
Augustin Harter
20:00 – 21:00 GMT
19 - Choice of Representation Matters for Adversarial Robustness
Amartya Sanyal
20:00 – 21:00 GMT
17 - Uncovering How Neural Network Representations Vary with Width and Depth
Thao Nguyen
20:00 – 21:00 GMT
24 - Deep Context-Aware Novelty Detection
Ellen Rushe
20:00 – 21:00 GMT
14 - Learning Dynamical Systems Requires Rethinking Generalization
Rui Wang
20:00 – 21:00 GMT
10 - A Trainable Optimal Transport Embedding for Feature Aggregation
Grégoire Mialon
20:00 – 21:00 GMT
1 - Real-time Classification from Short Event-Camera Streams using Input-filtering Neural ODEs
Giorgio Giannone
20:00 – 21:00 GMT
11 - A novel approach for semiconductor etching process with inductive biases
Sanghoon Myung
20:00 – 21:00 GMT
23 - Constraining neural networks output by an interpolating loss function with region priors
Hannes Bergkvist
20:00 – 21:00 GMT
8 - Individuality in the hive - Learning to embed lifetime social behavior of honey bees
Benjamin Wild
20:00 – 21:00 GMT
3 - Improving the trustworthiness of image classification models by utilizing bounding-box annotations
Dharma R KC
20:00 – 21:00 GMT
15 - Lie Algebra Convolutional Networks with Automatic Symmetry Extraction
Nima Dehmamy
20:00 – 21:00 GMT
7 - A Symmetric and Object-Centric World Model for Stochastic Environments
Patrick Emami
20:00 – 21:00 GMT
2 - Relevance of Rotationally Equivariant Convolutions for Predicting Molecular Properties
Benjamin K Miller
20:00 – 21:00 GMT
13 - Gradient-based Optimization for Multi-resource Spatial Coverage
Nitin Kamra
20:00 – 21:00 GMT
12 - Physics-aware, data-driven discovery of slow and stable coarse-grained dynamics for high-dimensional multiscale systems
Sebastian Kaltenbach
20:00 – 21:00 GMT
22 - Modelling Advertising Awareness, an Interpretable and Differentiable Approach
Luz Blaz
20:00 – 21:00 GMT
18 - Simulating Surface Wave Dynamics with Convolutional Networks
20:00 – 21:00 GMT
16 - An Image is Worth 16 × 16 Tokens: Visual Priors for Efficient Image Synthesis with Transformers
Robin Rombach
21:00 – 21:15 GMT
Liwei Chen - Deep Learning Surrogates for Computational Fluid Dynamics
Nils Thuerey
21:15 – 22:15 GMT
Maziar Raissi - Hidden Physics Models
Maziar Raissi
22:15 – 22:30 GMT
Closing Remarks