Improving the sample efficiency of reinforcement learning algorithms requires effective exploration. Following the principle of $\textit{optimism in the face of uncertainty}$ (OFU), we train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework. However, this introduces an additional mismatch between the stationary state-action distribution of the replay buffer and that of the target policy. To mitigate this off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy RL training. In particular, we correct the training distribution for both the policies and the critics. Empirically, we evaluate the proposed method on several challenging continuous control tasks and show superior performance compared to state-of-the-art methods. We also conduct extensive ablation studies to demonstrate the effectiveness and rationality of the proposed method.
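As a rough illustration of the two ingredients described above, the sketch below shows (i) an exploration-policy loss built from an approximate upper confidence bound over an ensemble of critics and (ii) a critic loss reweighted by a learned DICE-style correction ratio. This is a minimal PyTorch sketch under assumed interfaces; the function names, the `beta` coefficient, and the `ratio_net` usage are illustrative and not the authors' implementation.

```python
import torch


def ucb_exploration_loss(q_values: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Exploration-actor loss: negate an approximate upper confidence bound
    mean(Q) + beta * std(Q) computed over an ensemble of critics.

    q_values: tensor of shape (num_critics, batch) holding Q_i(s, a_explore),
    where a_explore is sampled from the exploration policy with the
    reparameterization trick so gradients flow into its parameters.
    """
    q_mean = q_values.mean(dim=0)
    q_std = q_values.std(dim=0, unbiased=False)
    ucb = q_mean + beta * q_std
    return -ucb.mean()  # gradient descent on -UCB == ascent on UCB


def dice_weighted_td_loss(q_pred: torch.Tensor,
                          td_target: torch.Tensor,
                          ratio: torch.Tensor) -> torch.Tensor:
    """Critic loss on replay-buffer samples, reweighted by a DICE-style
    correction ratio w(s, a) that approximates d^pi(s, a) / d^buffer(s, a).

    The ratio comes from a separately trained ratio network (not shown) and
    is detached so the critic update does not back-propagate into it.
    """
    w = ratio.detach()
    return (w * (q_pred - td_target.detach()) ** 2).mean()


# Hypothetical usage inside one training step (networks, replay buffer, and
# optimizers are assumed to exist elsewhere):
#   a_explore = explore_policy.rsample(states)                  # reparameterized actions
#   qs = torch.stack([q(states, a_explore) for q in critics])   # (num_critics, batch)
#   explore_loss = ucb_exploration_loss(qs, beta=1.0)
#   critic_loss = dice_weighted_td_loss(q_pred, td_target, ratio_net(states, actions))
```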
Author Information
Jiachen Li (University of California, Santa Barbara)
Jiachen Li is a second-year Ph.D. student at UC Santa Barbara working with Prof. William Wang. He received his M.S. degree in Electrical and Computer Engineering at UC San Diego, advised by Prof. Hao Su and Prof. Pengtao Xie, and his B.E. degree from Huazhong University of Science and Technology as an Outstanding Undergraduate in Terms of Academic Performance (Top 1%).
Shuo Cheng (Georgia Institute of Technology)
Zhenyu Liao (Amazon Advertising)
Huayan Wang (Kuaishou Technology)
William Yang Wang (University of California, Santa Barbara)
William Wang is the Co-Director of UC Santa Barbara's Natural Language Processing group and Center for Responsible Machine Learning. He is the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs, and an Associate Professor in the Department of Computer Science at the University of California, Santa Barbara. He received his PhD from the School of Computer Science at Carnegie Mellon University. He has broad interests in Artificial Intelligence, including statistical relational learning, information extraction, computational social science, dialog & generation, and vision. He has published more than 100 papers at leading NLP/AI/ML conferences and journals, and received best paper awards (or nominations) at ASRU 2013, CIKM 2013, EMNLP 2015, and CVPR 2019, a DARPA Young Faculty Award (Class of 2018), an IEEE AI's 10 to Watch Award (Class of 2020), an NSF CAREER Award (2021), two Google Faculty Research Awards (2018, 2019), three IBM Faculty Awards (2017-2019), two Facebook Research Awards (2018, 2019), an Amazon AWS Machine Learning Research Award, a JP Morgan Chase Faculty Research Award, an Adobe Research Award in 2018, and the Richard King Mellon Presidential Fellowship in 2011. He frequently serves as an Area Chair or Senior Area Chair for NAACL, ACL, EMNLP, and AAAI. He is an elected member of the IEEE Speech and Language Processing Technical Committee (2021-2023) and a member of the ACM Future of Computing Academy. In addition to research, William enjoys writing scientific articles that impact the broader online community. His work and opinions appear in major tech media outlets such as Wired, VICE, Scientific American, Fortune, Fast Company, NASDAQ, The Next Web, Law.com, and Mental Floss.
Qinxun Bai (Horizon Robotics)
More from the Same Authors
- 2021 : VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation »
  Linjie Li · Jie Lei · Zhe Gan · Licheng Yu · Yen-Chun Chen · Rohit Pillai · Yu Cheng · Luowei Zhou · Xin Wang · William Yang Wang · Tamara L Berg · Mohit Bansal · Jingjing Liu · Lijuan Wang · Zicheng Liu
- 2021 : A Dataset for Answering Time-Sensitive Questions »
  Wenhu Chen · Xinyi Wang · William Yang Wang
- 2022 : LAD: Language Augmented Diffusion for Reinforcement Learning »
  Edwin Zhang · Yujie Lu · William Yang Wang · Amy Zhang
- 2022 : Offline Reinforcement Learning with Closed-Form Policy Improvement Operators »
  Jiachen Li · Edwin Zhang · Ming Yin · Qinxun Bai · Yu-Xiang Wang · William Yang Wang
- 2022 : Guided Skill Learning and Abstraction for Long-Horizon Manipulation »
  Shuo Cheng · Danfei Xu
- 2023 Poster: Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning »
  Zih-Yun Chiu · Yi-Lin Tuan · William Yang Wang · Michael Yip
- 2023 Poster: LayoutGPT: Compositional Visual Planning and Generation with Large Language Models »
  Weixi Feng · Wanrong Zhu · Tsu-Jui Fu · Varun Jampani · Arjun Akula · Xuehai He · S Basu · Xin Wang · William Yang Wang
- 2023 Poster: LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation »
  Yujie Lu · Xianjun Yang · Xiujun Li · Xin Wang · William Yang Wang
- 2023 Poster: ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers »
  Kexun Zhang · Danqing Wang · Jingtao Xia · William Yang Wang · Lei Li
- 2023 Poster: Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data »
  Alon Albalak · Colin Raffel · William Yang Wang
- 2023 Poster: Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning »
  Xinyi Wang · Wanrong Zhu · Michael Saxon · Mark Steyvers · William Yang Wang
- 2023 Poster: Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text »
  Wanrong Zhu · Jack Hessel · Anas Awadalla · Samir Yitzhak Gadre · Jesse Dodge · Alex Fang · Youngjae Yu · Ludwig Schmidt · William Yang Wang · Yejin Choi
- 2022 Poster: Society of Agents: Regret Bounds of Concurrent Thompson Sampling »
  Yan Chen · Perry Dong · Qinxun Bai · Maria Dimakopoulou · Wei Xu · Zhengyuan Zhou
- 2021 Poster: Local Explanation of Dialogue Response Generation »
  Yi-Lin Tuan · Connor Pryor · Wenhu Chen · Lise Getoor · William Yang Wang
- 2021 Poster: Counterfactual Maximum Likelihood Estimation for Training Deep Networks »
  Xinyi Wang · Wenhu Chen · Michael Saxon · William Yang Wang