SARSA is an on-policy reinforcement learning algorithm for learning the policy of a Markov decision process. We investigate SARSA with linear function approximation in the non-i.i.d. setting, where only a single sample trajectory is available. With a sufficiently smooth, Lipschitz-continuous policy improvement operator, SARSA has been shown to converge asymptotically. Its non-asymptotic analysis, however, is challenging and has remained open, because the samples are non-i.i.d. and the behavior policy changes dynamically over time. In this paper, we develop a novel technique to explicitly characterize the stochastic bias of a class of stochastic approximation procedures with time-varying Markov transition kernels. Our approach enables non-asymptotic convergence analyses for this class of stochastic approximation algorithms, which may be of independent interest. Using this bias characterization technique together with a gradient-descent-type analysis, we provide a finite-sample analysis of the mean-square error of the SARSA algorithm. Finally, we present a fitted SARSA algorithm, which includes the original SARSA algorithm and its variant as special cases, and which provides a framework for iterative on-policy fitted policy iteration that is more memory- and computation-efficient. We also provide a finite-sample analysis for this fitted SARSA algorithm.
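To make the setting concrete, the following is a minimal sketch (not the paper's exact algorithm) of SARSA with linear function approximation, where the behavior policy is updated through a softmax policy improvement operator and learning proceeds along a single trajectory of non-i.i.d. Markovian samples. The environment interface (env.reset / env.step returning next_state, reward, done), the feature map phi, the action set actions, and all hyperparameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax_policy(theta, phi, state, actions, temperature=1.0):
    """Softmax policy improvement operator over linear Q-values (Lipschitz in theta)."""
    q_values = np.array([phi(state, a) @ theta for a in actions])
    prefs = q_values / temperature
    prefs -= prefs.max()                      # subtract max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def sarsa_linear(env, phi, actions, dim, num_steps=10_000,
                 alpha=0.05, gamma=0.95, temperature=1.0, seed=0):
    """On-policy SARSA updates along a single sample trajectory (non-i.i.d. data)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)                     # linear weights: Q(s, a) ~ phi(s, a)^T theta
    state = env.reset()
    action_idx = rng.choice(len(actions),
                            p=softmax_policy(theta, phi, state, actions, temperature))
    for _ in range(num_steps):
        next_state, reward, done = env.step(actions[action_idx])
        # Next action is drawn from the *current* policy, which itself depends on theta,
        # so the behavior policy changes with time as theta is updated.
        next_idx = rng.choice(len(actions),
                              p=softmax_policy(theta, phi, next_state, actions, temperature))
        bootstrap = 0.0 if done else gamma * (phi(next_state, actions[next_idx]) @ theta)
        td_error = reward + bootstrap - phi(state, actions[action_idx]) @ theta
        theta += alpha * td_error * phi(state, actions[action_idx])
        if done:
            state = env.reset()
            action_idx = rng.choice(len(actions),
                                    p=softmax_policy(theta, phi, state, actions, temperature))
        else:
            state, action_idx = next_state, next_idx
    return theta
```

In this sketch the softmax temperature loosely plays the role of the smoothness parameter of the policy improvement operator referred to above: a larger temperature yields a more slowly varying behavior policy.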
Author Information
Shaofeng Zou (University at Buffalo, the State University of New York)
Tengyu Xu (The Ohio State University)
Yingbin Liang (The Ohio State University)
More from the Same Authors
- 2021 Spotlight: Provably Faster Algorithms for Bilevel Optimization (Junjie Yang · Kaiyi Ji · Yingbin Liang)
- 2022 Poster: Provable Generalization of Overparameterized Meta-learning Trained with SGD (Yu Huang · Yingbin Liang · Longbo Huang)
- 2022: Online Min-max Optimization: Nonconvexity, Nonstationarity, and Dynamic Regret (Yu Huang · Yuan Cheng · Yingbin Liang · Longbo Huang)
- 2022 Spotlight: Will Bilevel Optimizers Benefit from Loops (Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying)
- 2022 Spotlight: Lightning Talks 3B-2 (Yu Huang · Tero Karras · Maxim Kodryan · Shiau Hong Lim · Shudong Huang · Ziyu Wang · Siqiao Xue · ILYAS MALIK · Ekaterina Lobacheva · Miika Aittala · Hongjie Wu · Yuhao Zhou · Yingbin Liang · Xiaoming Shi · Jun Zhu · Maksim Nakhodnov · Timo Aila · Yazhou Ren · James Zhang · Longbo Huang · Dmitry Vetrov · Ivor Tsang · Hongyuan Mei · Samuli Laine · Zenglin Xu · Wentao Feng · Jiancheng Lv)
- 2022 Spotlight: Provable Generalization of Overparameterized Meta-learning Trained with SGD (Yu Huang · Yingbin Liang · Longbo Huang)
- 2022 Spotlight: Lightning Talks 1A-3 (Kimia Noorbakhsh · Ronan Perry · Qi Lyu · Jiawei Jiang · Christian Toth · Olivier Jeunen · Xin Liu · Yuan Cheng · Lei Li · Manuel Rodriguez · Julius von Kügelgen · Lars Lorch · Nicolas Donati · Lukas Burkhalter · Xiao Fu · Zhongdao Wang · Songtao Feng · Ciarán Gilligan-Lee · Rishabh Mehrotra · Fangcheng Fu · Jing Yang · Bernhard Schölkopf · Ya-Li Li · Christian Knoll · Maks Ovsjanikov · Andreas Krause · Shengjin Wang · Hong Zhang · Mounia Lalmas · Bolin Ding · Bo Du · Yingbin Liang · Franz Pernkopf · Robert Peharz · Anwar Hithnawi · Julius von Kügelgen · Bo Li · Ce Zhang)
- 2022 Spotlight: Provable Benefit of Multitask Representation Learning in Reinforcement Learning (Yuan Cheng · Songtao Feng · Jing Yang · Hong Zhang · Yingbin Liang)
- 2022 Poster: A Unifying Framework of Off-Policy General Value Function Evaluation (Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin Liang)
- 2022 Poster: On the Convergence Theory for Hessian-Free Bilevel Algorithms (Daouda Sow · Kaiyi Ji · Yingbin Liang)
- 2022 Poster: Provable Benefit of Multitask Representation Learning in Reinforcement Learning (Yuan Cheng · Songtao Feng · Jing Yang · Hong Zhang · Yingbin Liang)
- 2022 Poster: Will Bilevel Optimizers Benefit from Loops (Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying)
- 2021 Poster: Online Robust Reinforcement Learning with Model Uncertainty (Yue Wang · Shaofeng Zou)
- 2021 Poster: Faster Non-asymptotic Convergence for Double Q-learning (Lin Zhao · Huaqing Xiong · Yingbin Liang)
- 2021 Poster: Provably Faster Algorithms for Bilevel Optimization (Junjie Yang · Kaiyi Ji · Yingbin Liang)
- 2021 Poster: Non-Asymptotic Analysis for Two Time-scale TDC with General Smooth Function Approximation (Yue Wang · Shaofeng Zou · Yi Zhou)
- 2020 Poster: Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters (Kaiyi Ji · Jason Lee · Yingbin Liang · H. Vincent Poor)
- 2020 Poster: Improving Sample Complexity Bounds for (Natural) Actor-Critic Algorithms (Tengyu Xu · Zhe Wang · Yingbin Liang)
- 2020 Poster: Finite-Time Analysis for Double Q-learning (Huaqing Xiong · Lin Zhao · Yingbin Liang · Wei Zhang)
- 2020 Spotlight: Finite-Time Analysis for Double Q-learning (Huaqing Xiong · Lin Zhao · Yingbin Liang · Wei Zhang)
- 2020 Poster: Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence Analysis (Shaocong Ma · Yi Zhou · Shaofeng Zou)
- 2019 Poster: SpiderBoost and Momentum: Faster Variance Reduction Algorithms (Zhe Wang · Kaiyi Ji · Yi Zhou · Yingbin Liang · Vahid Tarokh)
- 2019 Poster: Two Time-scale Off-Policy TD Learning: Non-asymptotic Analysis over Markovian Samples (Tengyu Xu · Shaofeng Zou · Yingbin Liang)
- 2018 Poster: Convergence of Cubic Regularization for Nonconvex Optimization under KL Property (Yi Zhou · Zhe Wang · Yingbin Liang)
- 2018 Spotlight: Convergence of Cubic Regularization for Nonconvex Optimization under KL Property (Yi Zhou · Zhe Wang · Yingbin Liang)
- 2018 Poster: Minimax Estimation of Neural Net Distance (Kaiyi Ji · Yingbin Liang)