Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the agent's ability to associate actions with consequences that may occur only after a long time interval. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that provide rich short-term supervision and help with credit assignment. However, they struggle to solve tasks with long delays between an action and the corresponding rewarding feedback. To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that can be used in place of the sparse or delayed environmental rewards. This paper is in the same vein -- starting with a surrogate RL objective that involves smoothing in the trajectory-space, we arrive at a new algorithm for learning guidance rewards. We show that the guidance rewards have an intuitive interpretation, and can be obtained without training any additional neural networks. Due to the ease of integration, we use the guidance rewards in a few popular algorithms (Q-learning, Actor-Critic, Distributional-RL) and present results in single-agent and multi-agent tasks that elucidate the benefit of our approach when the environmental rewards are sparse or delayed.
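The abstract does not spell out the algorithm, but the sketch below illustrates one plausible instantiation consistent with it: every transition in the replay buffer is relabeled with a dense guidance reward derived from the return of the trajectory it belongs to, so no additional neural network is trained. The class name GuidanceReplayBuffer and the min-max normalization over buffered returns are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import deque

class GuidanceReplayBuffer:
    """Illustrative sketch (not the paper's exact algorithm): a replay
    buffer that replaces the sparse/delayed environmental reward of each
    transition with a dense guidance reward -- here, the return of the
    trajectory the transition came from, min-max normalized over the
    returns currently stored in the buffer."""

    def __init__(self, capacity=100_000):
        # Each entry: (state, action, next_state, done, trajectory_return)
        self.buffer = deque(maxlen=capacity)
        # Trajectory returns seen so far, used for normalization.
        self.returns = deque(maxlen=capacity)

    def add_trajectory(self, transitions, traj_return):
        """transitions: list of (state, action, next_state, done) from one
        episode; traj_return: that episode's total environmental return,
        which may only arrive at the end (delayed feedback)."""
        self.returns.append(traj_return)
        for (s, a, s2, d) in transitions:
            self.buffer.append((s, a, s2, d, traj_return))

    def sample(self, batch_size):
        """Sample a batch where every timestep carries a dense reward."""
        r_min, r_max = min(self.returns), max(self.returns)
        span = max(r_max - r_min, 1e-8)  # avoid division by zero
        batch = random.sample(list(self.buffer), batch_size)
        return [(s, a, (R - r_min) / span, s2, d)
                for (s, a, s2, d, R) in batch]
```

Because the relabeling happens entirely inside the buffer, the sampled batches can be fed unchanged to an off-the-shelf Q-learning or actor-critic update, which is consistent with the ease of integration the abstract claims.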
Author Information
Tanmay Gangwani (University of Illinois, Urbana-Champaign)
I am a Ph.D. student in Computer Science at the University of Illinois, Urbana-Champaign, supervised by Jian Peng. I'm interested in machine learning, especially Reinforcement Learning. My research is mainly focused on designing algorithms that efficiently leverage expert demonstrations for RL (imitation learning), address the exploration challenge in complex environments, and use generative modeling methods for model-based RL. For details, please visit https://tgangwani.github.io
Yuan Zhou (UIUC)
Jian Peng (University of Illinois at Urbana-Champaign)
More from the Same Authors
- 2021: Imitation Learning from Observations under Transition Model Disparity
  Tanmay Gangwani · Yuan Zhou · Jian Peng
- 2021: Hindsight Foresight Relabeling for Meta-Reinforcement Learning
  Michael Wan · Jian Peng · Tanmay Gangwani
- 2022 Poster: Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation
  Zhizhou Ren · Anji Liu · Yitao Liang · Jian Peng · Jianzhu Ma
- 2022 Poster: Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures
  Shitong Luo · Yufeng Su · Xingang Peng · Sheng Wang · Jian Peng · Jianzhu Ma
- 2021 Poster: A 3D Generative Model for Structure-Based Drug Design
  Shitong Luo · Jiaqi Guan · Jianzhu Ma · Jian Peng
- 2020 Poster: Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition
  Zihan Zhang · Yuan Zhou · Xiangyang Ji
- 2020 Poster: Off-Policy Interval Estimation with Lipschitz Value Iteration
  Ziyang Tang · Yihao Feng · Na Zhang · Jian Peng · Qiang Liu
- 2019 Poster: Thresholding Bandit with Optimal Aggregate Regret
  Chao Tao · Saúl Blanco · Jian Peng · Yuan Zhou
- 2019 Poster: Exploration via Hindsight Goal Generation
  Zhizhou Ren · Kefan Dong · Yuan Zhou · Qiang Liu · Jian Peng