
RLCG: When Reinforcement Learning Meets Coarse Graining
Shenghao Wu · Tianyi Liu · Zhirui Wang · Wen Yan · Yingxiang Yang
Event URL: https://openreview.net/forum?id=XD6BnJO7PW

Coarse graining (CG) algorithms have been widely used to speed up molecular dynamics (MD) simulations. Recent data-driven CG algorithms have demonstrated performance competitive with empirical CG methods. However, these data-driven algorithms often rely heavily on labeled information (e.g., forces), which is sometimes unavailable, and they may not scale to large and complex molecular systems. In this paper, we propose Reinforcement Learning for Coarse Graining (RLCG), a reinforcement-learning-based framework for learning CG mappings. In particular, RLCG makes CG assignments based on the local information of each atom and is trained using a novel reward function. This "atom-centric" approach may substantially improve computational scalability. We showcase the power of RLCG by demonstrating its competitive performance against state-of-the-art methods on small (Alanine Dipeptide and Paracetamol) and medium-sized (Chignolin) molecules. More broadly, RLCG has great potential to accelerate the scientific discovery cycle, especially on large-scale problems.
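The abstract does not specify the policy, state representation, or reward used by RLCG, so the following is only an illustrative toy sketch of the general "atom-centric" idea: each atom independently picks a CG bead from a descriptor of its local environment, and a shared reward scores the resulting mapping. The 1-D atom positions, the position-bucket "local state", the within-bead-deviation reward, and the bandit-style epsilon-greedy learner are all invented for illustration and are not the paper's method.

```python
import random

random.seed(0)

# Toy 1-D "molecule": six atoms forming two spatial clusters.
ATOMS = [0.0, 0.3, 0.6, 5.0, 5.4, 5.7]
N_BEADS = 2

def local_state(x):
    # Crude stand-in for a local-environment descriptor: a coarse
    # position bucket. A real descriptor would encode local chemistry
    # and geometry, not just position.
    return int(x // 2)

def reward(atoms, assignment):
    # Toy reward: negative within-bead squared deviation from each
    # bead's center (higher is better; 0 would be perfect).
    total = 0.0
    for b in set(assignment):
        members = [x for x, a in zip(atoms, assignment) if a == b]
        center = sum(members) / len(members)
        total -= sum((x - center) ** 2 for x in members)
    return total

Q = {}  # (state, bead) -> running value estimate

def choose(state, eps):
    # Epsilon-greedy bead choice from the atom's local state.
    if random.random() < eps:
        return random.randrange(N_BEADS)
    return max(range(N_BEADS), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=500, lr=0.1, eps=0.2):
    for _ in range(episodes):
        states = [local_state(x) for x in ATOMS]
        actions = [choose(s, eps) for s in states]
        r = reward(ATOMS, actions)  # one shared reward per episode
        for s, a in zip(states, actions):
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + lr * (r - q)

train()
# Greedy (eps=0) readout of the learned per-atom assignment.
greedy = [choose(local_state(x), eps=0.0) for x in ATOMS]
print(greedy, round(reward(ATOMS, greedy), 3))
```

Because every atom decides from its own local state, the policy's cost grows with the number of atoms rather than with the number of possible global mappings, which is the scalability argument the "atom-centric" framing makes.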

Author Information

Shenghao Wu (Carnegie Mellon University)
Tianyi Liu (ByteDance)
Zhirui Wang (Princeton University Plasma Physics Lab)
Wen Yan (Bytedance Inc.)
Yingxiang Yang (ByteDance Inc)
