

Poster in Workshop: AI for Science: Progress and Promises

RLCG: When Reinforcement Learning Meets Coarse Graining

Shenghao Wu · Tianyi Liu · Zhirui Wang · Wen Yan · Yingxiang Yang

Keywords: [ graph neural networks ] [ reinforcement learning ] [ coarse graining ] [ molecular dynamics ]


Abstract:

Coarse graining (CG) algorithms have been widely used to speed up molecular dynamics (MD) simulations. Recent data-driven CG algorithms have demonstrated performance competitive with empirical CG methods. However, these data-driven algorithms often rely heavily on labeled information (e.g., forces), which is sometimes unavailable, and may not scale to large and complex molecular systems. In this paper, we propose Reinforcement Learning for Coarse Graining (RLCG), a reinforcement-learning-based framework for learning CG mappings. Specifically, RLCG makes CG assignments based on the local information of each atom and is trained using a novel reward function. This "atom-centric" approach may substantially improve computational scalability. We showcase the power of RLCG by demonstrating its competitive performance against state-of-the-art methods on small (alanine dipeptide and paracetamol) and medium-sized (Chignolin) molecules. More broadly, RLCG has great potential to accelerate the scientific discovery cycle, especially on large-scale problems.
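To make the atom-centric idea concrete, below is a minimal, purely illustrative sketch: a softmax policy assigns each atom to a CG bead from toy local features and is trained with REINFORCE against a placeholder spatial-compactness reward. Everything here is an assumption for illustration, including the linear policy (the paper uses graph neural networks), the reward, and all sizes; it is not the authors' actual RLCG implementation.

```python
# Hypothetical sketch of atom-centric CG assignment trained with policy
# gradients. All names, features, and the reward are illustrative
# placeholders; RLCG's actual GNN architecture and reward differ.
import numpy as np

rng = np.random.default_rng(0)

N_ATOMS, N_FEATURES, N_BEADS = 22, 8, 5  # e.g., a 22-atom molecule -> 5 beads

# Toy per-atom local features and coordinates. In an actual pipeline these
# would encode each atom's local chemical environment (e.g., GNN embeddings).
features = rng.normal(size=(N_ATOMS, N_FEATURES))
coords = rng.normal(size=(N_ATOMS, 3))

# Linear softmax policy: each atom independently picks a CG bead.
W = np.zeros((N_FEATURES, N_BEADS))

def policy_probs(W):
    logits = features @ W                       # (N_ATOMS, N_BEADS)
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def reward(assignment):
    # Placeholder reward: favor spatially compact beads by penalizing
    # intra-bead coordinate variance. The real reward is defined in the paper.
    r = 0.0
    for b in range(N_BEADS):
        members = coords[assignment == b]
        if len(members) > 0:
            r -= members.var(axis=0).sum()
    return r

# REINFORCE with a moving-average baseline to reduce gradient variance.
baseline, lr = 0.0, 0.05
for step in range(500):
    p = policy_probs(W)
    assignment = np.array([rng.choice(N_BEADS, p=p[i]) for i in range(N_ATOMS)])
    r = reward(assignment)
    baseline = 0.9 * baseline + 0.1 * r
    # Gradient of sum_i log pi(a_i | atom i) for a linear softmax policy.
    onehot = np.eye(N_BEADS)[assignment]
    W += lr * (r - baseline) * (features.T @ (onehot - p))

print("final reward:", reward(policy_probs(W).argmax(axis=1)))
```

Because each atom's assignment depends only on its own (local) features, the policy is shared across atoms and the per-step cost grows linearly with system size, which is the scalability argument the abstract hints at.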
