Poster
Non-Asymptotic Analysis for Two Time-scale TDC with General Smooth Function Approximation
Yue Wang · Shaofeng Zou · Yi Zhou
Temporal-difference learning with gradient correction (TDC) is a two time-scale algorithm for policy evaluation in reinforcement learning. The algorithm was originally proposed with linear function approximation and was later extended to general smooth function approximation. The asymptotic convergence for the on-policy setting with general smooth function approximation was established in [Bhatnagar et al., 2009]; however, the non-asymptotic convergence analysis remained open due to challenges from the nonlinear and two-time-scale update structure, the non-convex objective function, and the projection onto a time-varying tangent plane. In this paper, we develop novel techniques to address these challenges and explicitly characterize the non-asymptotic error bound for the general off-policy setting with either i.i.d. or Markovian samples, showing that the algorithm converges as fast as $\mathcal{O}(1/\sqrt{T})$ (up to a factor of $\mathcal{O}(\log T)$). Our approach applies to a wide range of value-based reinforcement learning algorithms with general smooth function approximation.
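The abstract refers to the two time-scale TDC update with a general smooth value function $V_\theta$. Below is a minimal sketch of one such update, loosely following the nonlinear TDC form analyzed in the cited work, written with JAX so that the value-function gradient and Hessian-vector product come from automatic differentiation. The network architecture, step sizes, and function names here are illustrative assumptions, and the off-policy importance weights and the projection onto the time-varying tangent plane discussed in the paper are omitted for brevity.

```python
# Minimal sketch of a two time-scale TDC update with a general smooth
# value function. All names and the architecture are illustrative; the
# off-policy importance weights and the tangent-plane projection from
# the paper are intentionally omitted.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def value(theta, s):
    # Hypothetical smooth approximator: a tiny one-hidden-layer network.
    W1, b1, w2 = theta
    return jnp.tanh(s @ W1 + b1) @ w2


def tdc_step(theta, w, s, r, s_next, gamma, alpha, beta):
    """One TDC update.

    theta: slow time-scale value-function parameters (a pytree).
    w:     fast time-scale auxiliary vector, same length as the flattened gradient.
    """
    flat_theta, unravel = ravel_pytree(theta)
    v = lambda th, x: value(unravel(th), x)

    grad_s = jax.grad(v)(flat_theta, s)            # phi  = grad_theta V(s)
    grad_s_next = jax.grad(v)(flat_theta, s_next)  # phi' = grad_theta V(s')
    delta = r + gamma * v(flat_theta, s_next) - v(flat_theta, s)  # TD error

    # Hessian-vector product (grad^2 V(s)) w via forward-over-reverse autodiff.
    hvp = jax.jvp(lambda th: jax.grad(v)(th, s), (flat_theta,), (w,))[1]

    # Slow update: TD term, gradient-correction term, and the curvature term
    # that appears only with nonlinear (smooth) function approximation.
    correction = gamma * jnp.dot(grad_s, w) * grad_s_next
    curvature = (delta - jnp.dot(grad_s, w)) * hvp
    new_flat_theta = flat_theta + alpha * (delta * grad_s - correction - curvature)

    # Fast update: w tracks the solution of a least-squares problem defined by
    # the TD error and the value-function gradients.
    new_w = w + beta * (delta - jnp.dot(grad_s, w)) * grad_s

    return unravel(new_flat_theta), new_w
```

In a run of this sketch, the slow step size alpha would be taken smaller (or decaying faster) than the fast step size beta, so that theta evolves on the slower time scale, which is the two time-scale structure the paper analyzes.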
Author Information
Yue Wang (State University of New York, Buffalo)
Shaofeng Zou (University at Buffalo, the State University of New York)
Yi Zhou (University of Utah)
More from the Same Authors
- 2022 Poster: Finding Correlated Equilibrium of Constrained Markov Game: A Primal-Dual Approach
  Ziyi Chen · Shaocong Ma · Yi Zhou
- 2021 Poster: Online Robust Reinforcement Learning with Model Uncertainty
  Yue Wang · Shaofeng Zou
- 2020 Poster: A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning
  Bhavya Kailkhura · Jayaraman Thiagarajan · Qunwei Li · Jize Zhang · Yi Zhou · Timo Bremer
- 2020 Poster: Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence Analysis
  Shaocong Ma · Yi Zhou · Shaofeng Zou
- 2019 Poster: SpiderBoost and Momentum: Faster Variance Reduction Algorithms
  Zhe Wang · Kaiyi Ji · Yi Zhou · Yingbin Liang · Vahid Tarokh
- 2019 Poster: Finite-Sample Analysis for SARSA with Linear Function Approximation
  Shaofeng Zou · Tengyu Xu · Yingbin Liang
- 2019 Poster: Two Time-scale Off-Policy TD Learning: Non-asymptotic Analysis over Markovian Samples
  Tengyu Xu · Shaofeng Zou · Yingbin Liang