

Poster in Workshop: AI for Accelerated Materials Design (AI4Mat-2023)

AdsorbRL: Deep Reinforcement Learning for Inverse Catalyst Design

Romain Lacombe · Khalid El-Awady

Keywords: [ Goal-Conditioned RL ] [ Inverse Catalyst Design ] [ Deep RL ] [ AI-Guided Design ]


Abstract:

Recent advances in Machine Learning-based DFT approximation have drastically accelerated computational adsorption energy estimation. New solutions to the climate crisis, however, hinge on the inverse material design problem: identifying new catalysts with precise desired adsorption energies for each target reaction. Here we introduce AdsorbRL, a Deep Reinforcement Learning (DRL) agent aiming to identify catalysts that best fit a target adsorption energy profile, trained using offline learning on the Materials Project and Open Catalyst 2020 data sets. While initial experiments with a DQN agent failed in a complex ternary compounds space, a simple Q-learning agent reaches near-optimal adsorption energy in element-wise traversal of the periodic table. Building on this insight, we introduce Random Edge Traversal to simplify the action space, and successfully train a single-objective DQN agent which improves target adsorption energy from random initial states by an average of 4.1 eV. We extend this approach to multi-objective, goal-conditioned learning, and train a DQN agent to identify materials with the highest (respectively lowest) adsorption energies possible for multiple simultaneous target adsorbates. We introduce a novel training scheme for this agent based on objectives sub-sampling, and report experimental results that suggest improved performance in the multi-objective, goal-conditioned RL setup. Overall, our results demonstrate the strong potential of DRL agents to tackle inverse catalyst design across complex chemical spaces.
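The abstract combines three ideas: element-wise traversal of a chemical space with Q-learning, Random Edge Traversal to shrink the action space, and objective sub-sampling for goal-conditioned multi-objective training. The toy sketch below illustrates how these pieces could fit together in a tabular setting; the element list, adsorbate set, target energies, and random energy table are placeholders for illustration only, not the paper's data or implementation.

```python
# Minimal sketch (not the authors' released code): goal-conditioned tabular
# Q-learning over a toy element space. "Random Edge Traversal" is modeled as
# offering one randomly drawn element swap per step, and objective
# sub-sampling as conditioning each episode on a random subset of target
# adsorbates. Adsorption energies are random placeholders; the paper draws
# them from Materials Project / Open Catalyst 2020 data.
import random
import numpy as np

ELEMENTS = ["Pt", "Pd", "Ni", "Cu", "Ag", "Au", "Fe", "Co"]   # toy chemical space
ADSORBATES = ["*OH", "*O", "*CO"]                              # toy target adsorbates
rng = np.random.default_rng(0)

# Placeholder adsorption-energy table: (element, adsorbate) -> energy in eV.
ENERGY = {(e, a): float(rng.normal(0.0, 2.0)) for e in ELEMENTS for a in ADSORBATES}

def reward(state, goal):
    """Negative mean |E - E_target| over the sub-sampled adsorbate goals."""
    return -float(np.mean([abs(ENERGY[(state, a)] - t) for a, t in goal]))

Q = {}  # tabular Q-values keyed by (state, goal, action)
alpha, gamma, eps, episodes, horizon = 0.1, 0.9, 0.2, 2000, 10

for _ in range(episodes):
    # Objective sub-sampling: condition this episode on a random subset of targets.
    k = random.randint(1, len(ADSORBATES))
    goal = tuple(sorted((a, -1.0) for a in random.sample(ADSORBATES, k)))
    state = random.choice(ELEMENTS)
    for _ in range(horizon):
        # Random Edge Traversal: only one randomly drawn neighbor is reachable this step.
        candidate = random.choice([e for e in ELEMENTS if e != state])
        actions = ("stay", "move")
        if random.random() < eps:
            action = random.choice(actions)                       # explore
        else:
            action = max(actions, key=lambda a: Q.get((state, goal, a), 0.0))
        next_state = candidate if action == "move" else state
        r = reward(next_state, goal)
        best_next = max(Q.get((next_state, goal, a), 0.0) for a in actions)
        key = (state, goal, action)
        Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * best_next - Q.get(key, 0.0))
        state = next_state
```

Per the abstract, the actual agent replaces the tabular Q-values with a DQN and operates over a richer (e.g. ternary compound) state space; the sketch only conveys the structure of the action restriction and the goal conditioning.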
