

Invited Talk
in
Workshop: Symmetry and Geometry in Neural Representations

Physics Priors in Machine Learning

Max Welling

Sat 16 Dec 12:30 p.m. PST — 1 p.m. PST

Abstract:

Good neural architectures are rooted in good inductive biases (a.k.a. priors). Equivariance under symmetries is a prime example of a successful physics-inspired prior, which sometimes dramatically reduces the number of examples needed to learn predictive models. Diffusion-based models, one of the most successful classes of generative models, are rooted in nonequilibrium statistical mechanics. Conversely, ML methods have recently been used to solve PDEs, for example in weather prediction, and to accelerate MD simulations by learning the (quantum-mechanical) interactions between atoms and electrons.
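As a point of reference (a standard definition, not specific to this talk), a map f between spaces on which a group G acts via representations \rho_{\text{in}} and \rho_{\text{out}} is equivariant if

f(\rho_{\text{in}}(g)\, x) = \rho_{\text{out}}(g)\, f(x) \quad \text{for all } g \in G \text{ and inputs } x.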

In this work we will try to extend this thinking to more flexible priors on the hidden variables of a neural network. In particular, we will impose wave-like dynamics on the hidden variables under transformations of the inputs, which relaxes the stricter notion of equivariance. We find that, under certain conditions, such wave-like dynamics naturally arises in these hidden representations. We formalize this idea in a VAE-over-time architecture in which the hidden dynamics is described by a Fokker-Planck (a.k.a. drift-diffusion) equation. This in turn leads to a new definition of a disentangled hidden representation of input states that can easily be manipulated to undergo transformations.
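For concreteness, a Fokker-Planck (drift-diffusion) equation of the kind referred to above describes the time evolution of a density p over hidden variables z; in its standard form with drift \mu and constant diffusion coefficient D (the symbols here are generic labels, since the specific choices used in the architecture are not spelled out in the abstract):

\frac{\partial p(z, t)}{\partial t} = -\nabla_z \cdot \big( \mu(z, t)\, p(z, t) \big) + D\, \nabla_z^2\, p(z, t).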
