Poster
Implicit Transfer Operator Learning: Multiple Time-Resolution Models for Molecular Dynamics
Mathias Schreiner · Ole Winther · Simon Olsson
Event URL: https://github.com/olsson-group/ito
Computing properties of molecular systems relies on estimating expectations of the (unnormalized) Boltzmann distribution. Molecular dynamics (MD) is a broadly adopted technique to approximate such quantities. However, stable simulations require very small integration time-steps ($10^{-15}\,\mathrm{s}$), whereas convergence of some moments, e.g. binding free energies or rates, may require sampling processes on time-scales as long as $10^{-1}\,\mathrm{s}$, and these simulations must be repeated for every molecular system independently. Here, we present Implicit Transfer Operator (ITO) Learning, a framework for learning surrogates of the simulation process at multiple time-resolutions. We implement ITO with denoising diffusion probabilistic models and a new SE(3)-equivariant architecture, and show that the resulting models can generate self-consistent stochastic dynamics across multiple time-scales, even when the system is only partially observed. Finally, we present a coarse-grained CG-SE3-ITO model which can quantitatively model all-atom molecular dynamics using only coarse molecular representations. As such, ITO provides an important step towards multiple time- and space-resolution acceleration of MD. Code is available at https://github.com/olsson-group/ito.
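The core idea in the abstract — a generative surrogate of the simulation process conditioned on the lag time, so one model covers multiple time-resolutions — can be sketched in a few lines. The sketch below is illustrative only (function names and the toy trajectory are assumptions, not the authors' released code): it builds time-lagged training triples $(x_t, x_{t+\Delta}, \Delta)$ and applies the standard DDPM forward-noising step to the future frame, which is the quantity an ITO-style model would learn to denoise conditioned on $x_t$ and $\Delta$.

```python
import numpy as np

def sample_training_pairs(traj, lags, n_pairs, rng):
    """Draw (x_t, x_{t+lag}, lag) triples from one trajectory.

    traj: (T, D) array of frames; lags: candidate lag times in frames,
    spanning several time-scales (e.g. 1, 10, 100 integration steps).
    """
    T = traj.shape[0]
    pairs = []
    for _ in range(n_pairs):
        lag = int(rng.choice(lags))          # sample a time-resolution
        t = int(rng.integers(0, T - lag))    # sample a starting frame
        pairs.append((traj[t], traj[t + lag], lag))
    return pairs

def ddpm_forward_noise(x_future, alpha_bar, rng):
    """Standard DDPM forward process applied to the future frame:
    x_noisy = sqrt(alpha_bar) * x + sqrt(1 - alpha_bar) * eps.
    A conditional denoiser would be trained to recover eps given
    (x_noisy, x_t, lag)."""
    eps = rng.standard_normal(x_future.shape)
    x_noisy = np.sqrt(alpha_bar) * x_future + np.sqrt(1.0 - alpha_bar) * eps
    return x_noisy, eps

# Toy usage on a random "trajectory" of 6 coordinates.
rng = np.random.default_rng(0)
traj = rng.standard_normal((1000, 6))
pairs = sample_training_pairs(traj, lags=[1, 10, 100], n_pairs=4, rng=rng)
x_t, x_future, lag = pairs[0]
x_noisy, eps = ddpm_forward_noise(x_future, alpha_bar=0.5, rng=rng)
```

In the paper's setting, the denoiser is an SE(3)-equivariant network and the frames are (possibly coarse-grained) molecular coordinates; this sketch only shows the lag-conditioned data construction that makes a single surrogate usable at multiple time-resolutions.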
Author Information
Mathias Schreiner (DTU)
Ole Winther (Technical University of Denmark)
Simon Olsson (Chalmers University of Technology)
More from the Same Authors
- 2020 Meetup: Copenhagen, Denmark
  Ole Winther
- 2021: Hierarchical Few-Shot Generative Models
  Giorgio Giannone · Ole Winther
- 2022: Machine Learning for Chemical Reactions: A Dance of Datasets and Models
  Mathias Schreiner · Arghya Bhowmik · Tejs Vegge · Jonas Busk · Peter Bjørn Jørgensen · Ole Winther
- 2022: Identifying endogenous peptide receptors by combining structure and transmembrane topology prediction
  Felix Teufel · Jan Christian Refsgaard · Christian Toft Madsen · Carsten Stahlhut · Mads Grønborg · Dennis Madsen · Ole Winther
- 2022: Few-Shot Diffusion Models
  Giorgio Giannone · Didrik Nielsen · Ole Winther
- 2023: SecretoGen: towards prediction of signal peptides for efficient protein secretion
  Felix Teufel · Carsten Stahlhut · Jan Refsgaard · Henrik Nielsen · Ole Winther · Dennis Madsen
- 2023: Improving Precision in Language Models Learning from Invalid Samples
  Niels Larsen · Giorgio Giannone · Ole Winther · Kai Blin
- 2023: DiffEnc: Variational Diffusion with a Learned Encoder
  Beatrix M. G. Nielsen · Anders Christensen · Andrea Dittadi · Ole Winther
- 2023: Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation
  Giorgio Giannone · Akash Srivastava · Ole Winther · Faez Ahmed
- 2023 Poster: Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation
  Giorgio Giannone · Akash Srivastava · Ole Winther · Faez Ahmed
- 2019 Poster: BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling
  Lars Maaløe · Marco Fraccaro · Valentin Liévin · Ole Winther
- 2018 Poster: Recurrent Relational Networks
  Rasmus Berg Palm · Ulrich Paquet · Ole Winther
- 2017: Panel Session
  Neil Lawrence · Finale Doshi-Velez · Zoubin Ghahramani · Yann LeCun · Max Welling · Yee Whye Teh · Ole Winther
- 2017 Poster: A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
  Marco Fraccaro · Simon Kamronn · Ulrich Paquet · Ole Winther
- 2017 Spotlight: A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
  Marco Fraccaro · Simon Kamronn · Ulrich Paquet · Ole Winther
- 2017 Poster: Hash Embeddings for Efficient Word Representations
  Dan Tito Svenstrup · Jonas Hansen · Ole Winther