Recently, using reinforcement learning (RL) to generate molecules with desired properties has been highlighted as a promising strategy for drug design. A molecular docking program, a physics-based simulation that estimates protein-small-molecule binding affinity, can serve as an ideal reward function for RL, since it is a straightforward proxy of therapeutic potential. Still, two imminent challenges exist for this task. First, the models often fail to generate chemically realistic and pharmacochemically acceptable molecules. Second, docking-score optimization is a difficult exploration problem that involves many local optima and a non-smooth objective surface with respect to molecular structure. To tackle these challenges, we propose a novel RL framework that generates pharmacochemically acceptable molecules with high docking scores. Our method, Fragment-based generative RL with Explorative Experience replay for Drug design (FREED), constrains the generated molecules to a realistic and qualified chemical space and effectively explores that space by coupling our fragment-based generation method with a novel error-prioritized experience replay (PER). We also show that our model performs well in both de novo and scaffold-based generation schemes. Our model produces molecules of higher quality than existing methods while achieving state-of-the-art performance on two of three targets in terms of the docking scores of the generated molecules. We further show with ablation studies that our predictive-error variant, FREED(PE), significantly improves model performance.
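To make the PER component of the abstract concrete, below is a minimal, hedged sketch of an error-prioritized replay buffer: transitions whose associated error is larger are sampled more often for training, and priorities are refreshed after each update. This is an illustration of the general prioritized-replay idea, not the authors' FREED implementation; the class and method names are hypothetical, and the error signal here is just a supplied scalar.

```python
import random


class PrioritizedReplayBuffer:
    """Illustrative error-prioritized experience replay (PER).

    Each stored transition carries a priority proportional to its
    (absolute) error; sampling probability follows these priorities,
    so high-error transitions are replayed more frequently.
    """

    def __init__(self, capacity=10_000, eps=1e-6):
        self.capacity = capacity
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []        # list of [transition, priority]

    def add(self, transition, error):
        # Drop the oldest entry once capacity is reached (simple FIFO eviction).
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append([transition, abs(error) + self.eps])

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        priorities = [p for _, p in self.buffer]
        idxs = random.choices(range(len(self.buffer)),
                              weights=priorities, k=batch_size)
        return [self.buffer[i][0] for i in idxs], idxs

    def update_priorities(self, idxs, errors):
        # After a learning step, refresh priorities with the new errors.
        for i, e in zip(idxs, errors):
            self.buffer[i][1] = abs(e) + self.eps


# Usage sketch: high-error transition "b" dominates the sampling distribution.
buf = PrioritizedReplayBuffer(capacity=4)
for transition, error in [("a", 0.1), ("b", 5.0), ("c", 0.2)]:
    buf.add(transition, error)
batch, idxs = buf.sample(2)
buf.update_priorities(idxs, [1.0, 1.0])
```

In FREED(PE) the prioritization signal is a *predictive* error rather than the classic TD error, but the replay mechanics sketched above are the same: priorities drive sampling, and they are updated as the model's error estimates change.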
Author Information
Soojung Yang (MIT)
Doyeong Hwang (Onepredict)
Seul Lee (Korea Advanced Institute of Science and Technology)
Seongok Ryu (Korea Advanced Institute of Science and Technology)
Sung Ju Hwang (KAIST, AITRICS)