Machine learning has enabled the prediction of quantum chemical properties with high accuracy and efficiency, making it possible to bypass computationally costly ab initio calculations. Instead of training on a fixed set of properties, more recent approaches attempt to learn the electronic wavefunction (or density) as a central quantity of atomistic systems, from which all other observables can be derived. This is complicated by the fact that wavefunctions transform non-trivially under molecular rotations, which makes them a challenging prediction target. To solve this issue, we introduce general SE(3)-equivariant operations and building blocks for constructing deep learning architectures for geometric point cloud data and apply them to reconstruct wavefunctions of atomistic systems with unprecedented accuracy. Our model achieves speedups of over three orders of magnitude compared to ab initio methods and reduces prediction errors by up to two orders of magnitude compared to the previous state of the art. This accuracy makes it possible to derive properties such as energies and forces directly from the wavefunction in an end-to-end manner. We demonstrate the potential of our approach in a transfer learning application, where a model trained on low-accuracy reference wavefunctions implicitly learns to correct for electronic many-body interactions from observables computed at a higher level of theory. Such machine-learned wavefunction surrogates pave the way towards novel semi-empirical methods, offering resolution at an electronic level while drastically decreasing computational cost. Additionally, the predicted wavefunctions can serve as an initial guess in conventional ab initio methods, decreasing the number of iterations required to arrive at a converged solution and thus leading to significant speedups without any loss of accuracy or robustness.
While we focus on physics applications in this contribution, the proposed equivariant framework for deep learning on point clouds is also promising beyond physics, for example in computer vision or graphics.
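To illustrate the core idea behind equivariant operations on point clouds, the following is a minimal NumPy sketch (not the paper's architecture) of one simple SE(3)-equivariant building block: each point aggregates relative position vectors to its neighbours, weighted by a rotation-invariant radial filter. The `radial_filter` function stands in for a learnable radial network and is a hypothetical placeholder; the numerical check verifies that rotating and translating the input rotates the output accordingly.

```python
import numpy as np

def radial_filter(d):
    # Placeholder for a learnable radial function; a fixed Gaussian for illustration.
    return np.exp(-d**2)

def equivariant_layer(positions):
    """Toy SE(3)-equivariant operation on a point cloud.

    Each point aggregates relative position vectors to all other points,
    weighted by a rotation-invariant function of the pairwise distance.
    The output vectors rotate with the input and are unaffected by
    global translations.
    """
    rij = positions[None, :, :] - positions[:, None, :]   # (N, N, 3) relative vectors
    dij = np.linalg.norm(rij, axis=-1)                    # (N, N) pairwise distances
    w = radial_filter(dij)
    np.fill_diagonal(w, 0.0)                              # exclude self-interaction
    return (w[..., None] * rij).sum(axis=1)               # (N, 3) vector features

# Equivariance check: f(R x + t) == R f(x)
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))             # random orthogonal matrix
R = q * np.sign(np.linalg.det(q))                        # ensure a proper rotation, det(R) = +1
t = rng.normal(size=3)
out_of_transformed = equivariant_layer(x @ R.T + t)
transformed_output = equivariant_layer(x) @ R.T
assert np.allclose(out_of_transformed, transformed_output)
```

Because the distances are invariant and the relative vectors rotate with the molecule, the aggregated features transform exactly as vectors should; the full framework generalizes this principle to higher-order (tensorial) features needed to represent wavefunction coefficients.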
Author Information
Oliver Unke (Google Research)
Mihail Bogojeski (TU Berlin)
Michael Gastegger (TU Berlin)
Mario Geiger (EPFL)
Tess Smidt (Berkeley Lab)
Klaus-Robert Müller (TU Berlin)
More from the Same Authors
- 2022 Poster: So3krates: Equivariant attention for interactions on arbitrary length-scales in molecular systems »
  Thorben Frank · Oliver Unke · Klaus-Robert Müller
- 2021 Poster: Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging »
  Ali Hashemi · Yijing Gao · Chang Cai · Sanjay Ghosh · Klaus-Robert Müller · Srikantan Nagarajan · Stefan Haufe
- 2021 Poster: Relative stability toward diffeomorphisms indicates performance in deep nets »
  Leonardo Petrini · Alessandro Favero · Mario Geiger · Matthieu Wyart
- 2020: Panel »
  Alan Aspuru-Guzik · Jennifer Listgarten · Klaus-Robert Müller · Nadine Schneider
- 2020: Invited Talk: Klaus-Robert Müller & Kristof Schütt: Machine Learning meets Quantum Chemistry »
  Klaus-Robert Müller · Kristof Schütt
- 2019 Poster: A General Theory of Equivariant CNNs on Homogeneous Spaces »
  Taco Cohen · Mario Geiger · Maurice Weiler
- 2019 Poster: Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules »
  Niklas Gebauer · Michael Gastegger · Kristof Schütt
- 2018 Workshop: Machine Learning for Molecules and Materials »
  José Miguel Hernández-Lobato · Klaus-Robert Müller · Brooks Paige · Matt Kusner · Stefan Chmiela · Kristof Schütt
- 2018 Poster: 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data »
  Maurice Weiler · Wouter Boomsma · Mario Geiger · Max Welling · Taco Cohen
- 2017 Workshop: Machine Learning for Molecules and Materials »
  Kristof Schütt · Klaus-Robert Müller · Anatole von Lilienfeld · José Miguel Hernández-Lobato · Alan Aspuru-Guzik · Bharath Ramsundar · Matt Kusner · Brooks Paige · Stefan Chmiela · Alexandre Tkatchenko · Koji Tsuda