

Interpretable Inductive Biases and Physically Structured Learning

Michael Lutter · Alexander Terenin · Shirley Ho · Lei Wang

Sat 12 Dec, 6:30 a.m. PST

Over the last decade, deep networks have propelled machine learning to accomplish tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in the data: human-imperceptible changes to the input can lead to absurd predictions. In many application areas, including physics, robotics, and the social and life sciences, this motivates the need for robustness and interpretability, so that deep networks can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge within the model or the learning process as an inductive bias, which regularizes the model, helps avoid overfitting, and makes the model easier to understand for scientists who are not machine learning experts. In the last few years, researchers from different fields have proposed various ways of combining domain knowledge with machine learning and have successfully applied these techniques to a range of applications.
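As a minimal sketch of what such a physically structured inductive bias can look like, consider a network that must predict a mass (inertia) matrix, which physics requires to be symmetric positive definite. Rather than hoping the network learns this constraint from data, the architecture can enforce it by construction via a Cholesky-style parameterization. The function names below are illustrative, not from any specific workshop paper:

```python
import numpy as np

def softplus(x):
    # Smooth map to positive values, used for the diagonal entries.
    return np.log1p(np.exp(x))

def spd_mass_matrix(raw, n):
    """Assemble a symmetric positive-definite n-by-n matrix from
    n*(n+1)/2 unconstrained network outputs (`raw`).

    The physical constraint (a valid mass matrix) is encoded in the
    architecture itself: M = L @ L.T with a positive diagonal on the
    lower-triangular factor L is positive definite for ANY raw output,
    so the constraint never needs to be learned from data."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = raw
    # A strictly positive diagonal makes L invertible, hence M = L L^T
    # symmetric positive definite.
    L[np.diag_indices(n)] = softplus(np.diag(L))
    return L @ L.T

# Arbitrary "network outputs" still yield a physically valid matrix.
M = spd_mass_matrix(np.array([0.3, -1.2, 0.5, 2.0, -0.7, 0.1]), 3)
```

The same idea, structuring the hypothesis class so that every representable model satisfies the physical prior, underlies approaches such as Lagrangian- and Hamiltonian-structured networks discussed in this line of work.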
