

Poster

The Importance of Being Scalable: Efficient Architectures for Neural Network Potentials

Eric Qu · Aditi Krishnapriyan

East Exhibit Hall A-C #3910
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Scaling has been a critical factor in enhancing model performance and generalization across various fields of machine learning, such as natural language processing and computer vision. However, despite recent successes, scalability remains a challenge in the domain of Neural Network Interatomic Potentials (NNIPs), due to hard constraints on rotational equivariance and fixed sets of bond-directional features. In this work, we first investigate various scaling strategies for NNIP models. Our findings indicate that attention mechanisms are particularly effective in scaling NNIP models, and we identify a simple yet effective way to incorporate bond-directional information into the model. Based on these insights, we propose a scalable NNIP architecture, Efficient Graph Attention Potential (EGAP). EGAP leverages highly optimized multi-head self-attention mechanisms in graph neural networks, yielding substantial efficiency gains compared to existing equivariant NNIP models: a 5.3x speedup in inference time, 4.6x lower memory usage, and an order of magnitude fewer FLOPs. In addition to these efficiency gains, EGAP achieves state-of-the-art performance on datasets including OC20 and MD22.
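As a rough illustration (not the authors' code), the sketch below shows the general pattern the abstract describes: applying a standard, highly optimized multi-head self-attention kernel to per-atom neighborhood features instead of equivariant tensor operations. The class name, feature shapes, padding scheme, and pooling choice are assumptions made for this example only.

```python
import torch
import torch.nn as nn

class NeighborhoodSelfAttention(nn.Module):
    """Illustrative multi-head self-attention over padded atomic neighborhoods.

    A minimal sketch, not the EGAP implementation: it only demonstrates reusing
    an off-the-shelf attention kernel on invariant per-neighbor features.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, neighbor_feats: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # neighbor_feats: (num_atoms, max_neighbors, dim) invariant edge/node features
        # pad_mask: (num_atoms, max_neighbors), True where a neighbor slot is padding
        out, _ = self.attn(neighbor_feats, neighbor_feats, neighbor_feats,
                           key_padding_mask=pad_mask)
        # pool attended neighbor messages into a per-atom update
        out = out.masked_fill(pad_mask.unsqueeze(-1), 0.0).sum(dim=1)
        return self.proj(out)

# usage: 32 atoms, up to 20 neighbors each, 128-dimensional features
feats = torch.randn(32, 20, 128)
pad = torch.zeros(32, 20, dtype=torch.bool)
print(NeighborhoodSelfAttention(128)(feats, pad).shape)  # torch.Size([32, 128])
```

Because the attention itself operates only on invariant features, it can use the same dense, hardware-friendly kernels developed for language and vision models, which is the kind of efficiency gain the abstract reports.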
