

Poster in Workshop: Machine Learning and the Physical Sciences

Reducing Down(stream)time: Pretraining Molecular GNNs using Heterogeneous AI Accelerators

Jenna A Bilbrey · Kristina Herman · Henry Sprueill · Sotiris Xantheas · Payel Das · Manuel Lopez Roldan · Mike Kraus · Hatem Helal · Sutanay Choudhury


Abstract:

Recent advancements in self-supervised learning and transfer learning have popularized approaches that pretrain models on massive data sources and subsequently finetune them for a specific task. While such approaches have become the norm in fields such as natural language processing, the implementation and evaluation of transfer learning for chemistry is still in its early stages. In this work, we demonstrate finetuning for downstream tasks of a graph neural network (GNN) pretrained on a molecular database containing 2.7 million water clusters. Using Graphcore IPUs as AI accelerators for training molecular GNNs reduces pretraining time from a reported 2.7 days on 0.5M clusters to 92 minutes on 2.7M clusters. Finetuning the pretrained model for the downstream tasks of molecular dynamics and level-of-theory transfer took only 8.3 hours and 28 minutes, respectively, on a single GPU.
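
To make the pretrain-then-finetune workflow concrete, the sketch below shows the general pattern in PyTorch: freeze a pretrained encoder and train a fresh readout head on a downstream property. This is a minimal illustration, not the authors' code; the toy model, layer names, and the checkpoint path `pretrained_water_clusters.pt` are hypothetical stand-ins for the paper's molecular GNN.

```python
# Minimal sketch of finetuning a pretrained graph-property model.
# All names here are illustrative placeholders, not the authors' architecture.
import torch
import torch.nn as nn

class ToyMolecularGNN(nn.Module):
    """Stand-in for a molecular GNN: an encoder plus a scalar readout head."""
    def __init__(self, node_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(   # plays the role of message-passing layers
            nn.Linear(node_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.head = nn.Linear(hidden, 1)  # predicts a scalar (e.g., energy)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        h = self.encoder(node_feats)      # per-node embeddings
        return self.head(h.sum(dim=0))    # sum-pool over nodes -> graph property

model = ToyMolecularGNN()
# Hypothetical pretrained checkpoint; uncomment with a real file:
# model.load_state_dict(torch.load("pretrained_water_clusters.pt"))

# Finetuning: freeze the pretrained encoder, train a fresh head on the
# downstream task (e.g., a different level of theory).
for p in model.encoder.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, 1)

opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random data standing in for a real sample.
node_feats = torch.randn(12, 16)   # 12 atoms, 16 features each
target = torch.randn(1)            # downstream label (e.g., refined energy)
opt.zero_grad()
loss = loss_fn(model(node_feats), target)
loss.backward()
opt.step()
```

Whether to freeze the encoder or finetune it end-to-end is a design choice; freezing trains fastest, while unfreezing typically helps when the downstream task differs substantially from the pretraining objective.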
