Spotlight in Workshop: AI for Accelerated Materials Design (AI4Mat-2023)

MatSciML: A Broad, Multi-Task Benchmark for Solid-State Materials Modeling

Kin Long Kelvin Lee · Carmelo Gonzales · Marcel Nassar · Matthew Spellings · Michael Galkin · Santiago Miret

Keywords: [ solid-state materials ] [ open-source data ] [ crystal structures ] [ generative models ] [ multi-task learning ]

Fri 15 Dec 7:40 a.m. PST — 7:50 a.m. PST

Abstract:

We propose MatSci ML, a novel benchmark for modeling materials science with machine learning methods, focused on solid-state materials with periodic crystal structures. Applying machine learning methods to solid-state materials is a nascent field with substantial fragmentation, largely driven by the wide variety of datasets used to develop machine learning models. This fragmentation makes it difficult to compare the performance and generalizability of different methods, thereby hindering overall research progress in the field. Building on top of open-source datasets, including large-scale datasets such as the Open Catalyst Project, OQMD, NOMAD, the Carolina Materials Database, and the Materials Project, the MatSci ML benchmark provides a diverse set of materials systems and properties data for model training and evaluation, including simulated energies, atomic forces, and material band gaps, as well as classification data for crystal symmetries via space groups. The diversity of properties in MatSci ML makes it possible to implement and evaluate multi-task learning algorithms for solid-state materials, while the diversity of datasets facilitates the development of new, more generalized algorithms and methods across multiple datasets. In the multi-dataset learning setting, MatSci ML enables researchers to combine observations from multiple datasets to perform joint prediction of common properties, such as energy and forces. Using MatSci ML, we evaluate the performance of different graph neural networks and equivariant point cloud networks on several benchmark tasks spanning single-task, multi-task, and multi-dataset learning scenarios.
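The multi-task setting described in the abstract can be illustrated with a small, self-contained sketch: a shared encoder over featurized structures feeding two task heads, one for energy regression and one for space-group classification, trained with a joint loss. All class names, tensor shapes, and the synthetic data below are illustrative assumptions for demonstration only, not the actual MatSciML API.

```python
# Minimal multi-task sketch: shared encoder + two task heads (energy regression,
# space-group classification), trained jointly on synthetic stand-in data.
# Names, shapes, and the 230-way space-group head are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Maps a per-structure feature vector to a shared latent representation."""
    def __init__(self, in_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MultiTaskModel(nn.Module):
    """Shared encoder with an energy (regression) head and a space-group (classification) head."""
    def __init__(self, in_dim: int = 64, hidden_dim: int = 128, n_space_groups: int = 230):
        super().__init__()
        self.encoder = SharedEncoder(in_dim, hidden_dim)
        self.energy_head = nn.Linear(hidden_dim, 1)
        self.space_group_head = nn.Linear(hidden_dim, n_space_groups)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        return self.energy_head(h).squeeze(-1), self.space_group_head(h)

# Synthetic stand-in for featurized crystal structures and their labels.
batch_size, in_dim = 32, 64
features = torch.randn(batch_size, in_dim)
energies = torch.randn(batch_size)                    # regression target
space_groups = torch.randint(0, 230, (batch_size,))   # classification target

model = MultiTaskModel(in_dim=in_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):
    pred_energy, pred_sg_logits = model(features)
    # Joint objective: a simple unweighted sum of the per-task losses.
    loss = nn.functional.mse_loss(pred_energy, energies) \
         + nn.functional.cross_entropy(pred_sg_logits, space_groups)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the benchmark itself, the encoder would be a graph neural network or equivariant point cloud network operating on periodic crystal structures, and the relative weighting of the task losses is a design choice rather than the fixed unweighted sum used here.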
