Temporal difference (TD) learning with function approximation (linear functions or neural networks) has achieved remarkable empirical success, giving impetus to the development of finite-time analyses. As an accelerated version of TD, adaptive TD has been proposed and proven to enjoy finite-time convergence under linear function approximation. Existing numerical results have demonstrated the superiority of adaptive algorithms over vanilla ones. Nevertheless, the performance guarantee of adaptive TD with neural network approximation remains largely unknown. This paper establishes a finite-time analysis of adaptive TD with multi-layer ReLU network approximation when samples are generated from a Markov decision process. Our theory shows that if the width of the deep neural network is large enough, adaptive TD with neural network approximation can find the (optimal) value function with high probability, with the same iteration complexity as vanilla TD in the general case. Furthermore, we show that adaptive TD with neural network approximation, using the same width and search region, achieves a provable acceleration when the stochastic semi-gradients decay quickly.
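To make the setting concrete, below is a minimal sketch of TD(0) with a one-hidden-layer ReLU value network and an AdaGrad-style adaptive step size applied to the stochastic semi-gradient. The toy chain MDP, the network size, the one-hot features, and the AdaGrad-style update are all illustrative assumptions, not the paper's exact algorithm, constraint set, or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, width, gamma, eta = 10, 128, 0.9, 0.1

def step(s):
    """Random-walk chain: move left/right; reward 1 on reaching the right end."""
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    return s_next, float(s_next == n_states - 1)

def features(s):
    """One-hot state features (illustrative choice)."""
    x = np.zeros(n_states)
    x[s] = 1.0
    return x

W1 = rng.normal(scale=1.0 / np.sqrt(n_states), size=(width, n_states))
w2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)

def value_and_grads(x):
    h = np.maximum(W1 @ x, 0.0)       # ReLU hidden layer
    v = w2 @ h                        # scalar value estimate V(s)
    gW1 = np.outer(w2 * (h > 0), x)   # dV/dW1; h itself is dV/dw2
    return v, gW1, h

# AdaGrad-style accumulators yielding the per-coordinate adaptive step size.
G1, g2 = np.zeros_like(W1), np.zeros_like(w2)

s = 0
for t in range(20000):
    s_next, r = step(s)
    done = s_next == n_states - 1
    v, gW1, gw2 = value_and_grads(features(s))
    v_next = 0.0 if done else value_and_grads(features(s_next))[0]
    delta = r + gamma * v_next - v    # TD error (target frozen: semi-gradient)
    dW1, dw2 = -delta * gW1, -delta * gw2
    G1 += dW1 ** 2
    g2 += dw2 ** 2
    W1 -= eta * dW1 / (np.sqrt(G1) + 1e-8)   # adaptive per-coordinate step
    w2 -= eta * dw2 / (np.sqrt(g2) + 1e-8)
    s = 0 if done else s_next                # restart episodes at the left end
```

Replacing the adaptive denominator `np.sqrt(G1) + 1e-8` with a constant recovers vanilla TD(0); when the accumulated semi-gradients decay quickly, the adaptive denominator stays small and the effective step size stays large, which is the mechanism behind the acceleration regime discussed in the abstract.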
Author Information
Tao Sun (National University of Defense Technology)
College of Science, National University of Defense Technology, PRC.
Dongsheng Li (School of Computer Science, National University of Defense Technology)
Bao Wang (University of Utah)
More from the Same Authors
- 2021 Poster: FMMformer: Efficient and Flexible Transformer via Decomposed Near-field and Far-field Attention
  Tan Nguyen · Vai Suliafu · Stanley Osher · Long Chen · Bao Wang
- 2021 Poster: Heavy Ball Neural Ordinary Differential Equations
  Hedi Xia · Vai Suliafu · Hangjie Ji · Tan Nguyen · Andrea Bertozzi · Stanley Osher · Bao Wang
- 2020 Poster: MomentumRNN: Integrating Momentum into Recurrent Neural Networks
  Tan Nguyen · Richard Baraniuk · Andrea Bertozzi · Stanley Osher · Bao Wang
- 2019 Poster: General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme
  Tao Sun · Yuejiao Sun · Dongsheng Li · Qing Liao
- 2019 Poster: ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies
  Bao Wang · Zuoqiang Shi · Stanley Osher
- 2018 Poster: LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
  Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin
- 2018 Spotlight: LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
  Tianyi Chen · Georgios Giannakis · Tao Sun · Wotao Yin
- 2018 Poster: On Markov Chain Gradient Descent
  Tao Sun · Yuejiao Sun · Wotao Yin
- 2018 Poster: Deep Neural Nets with Interpolating Function as Output Activation
  Bao Wang · Xiyang Luo · Zhen Li · Wei Zhu · Zuoqiang Shi · Stanley Osher
- 2017 Poster: Asynchronous Coordinate Descent under More Realistic Assumptions
  Tao Sun · Robert Hannah · Wotao Yin