We consider offline reinforcement learning, where the goal is to learn a decision-making policy from logged data. Offline RL, particularly when coupled with (value) function approximation to allow for generalization in large or continuous state spaces, is becoming increasingly relevant in practice, because it avoids costly and time-consuming online data collection and is well-suited to safety-critical domains. Existing sample complexity guarantees for offline value function approximation methods typically require both (1) distributional assumptions (i.e., good coverage) and (2) representational assumptions (i.e., the ability to represent some or all Q-value functions) stronger than what is required for supervised learning. However, the necessity of these conditions and the fundamental limits of offline RL are not well understood in spite of decades of research. This led Chen and Jiang (2019) to conjecture that concentrability (the most standard notion of coverage) and realizability (the weakest representation condition) alone are not sufficient for sample-efficient offline RL. We resolve this conjecture in the positive by proving, information-theoretically, that even if both concentrability and realizability are satisfied, any algorithm requires sample complexity polynomial in the size of the state space to learn a non-trivial policy.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions beyond what is required in classical supervised learning, and they highlight a phenomenon called over-coverage, which serves as a fundamental barrier for offline value function approximation methods.
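For reference, the two conditions at issue are typically formalized as follows. These are the standard definitions in conventional notation, not quoted from the paper: μ denotes the data-logging distribution, d^π the state-action occupancy measure of a policy π, and F the value function class.

\text{Concentrability (coverage):}\quad \sup_{\pi}\,\sup_{s,a}\ \frac{d^{\pi}(s,a)}{\mu(s,a)} \;\le\; C \;<\; \infty

\text{Realizability (representation):}\quad Q^{\star}\in\mathcal{F}\ \ \text{(or, in a stronger variant, } Q^{\pi}\in\mathcal{F}\ \text{for all policies}\ \pi\text{)}

The lower bound described in the abstract states that even when both conditions hold, any algorithm needs a number of samples polynomial in the size of the state space to learn a non-trivial policy.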
Author Information
Dylan Foster (Microsoft Research)
Akshay Krishnamurthy (Microsoft)
David Simchi-Levi (MIT)
Yunzong Xu (MIT)
More from the Same Authors
- 2021 Spotlight: Bayesian decision-making under misspecified priors with applications to meta-learning
  Max Simchowitz · Christopher Tosh · Akshay Krishnamurthy · Daniel Hsu · Thodoris Lykouris · Miro Dudik · Robert Schapire
- 2022: Hybrid RL: Using both offline and online data can make RL efficient
  Yuda Song · Yifei Zhou · Ayush Sekhari · J. Bagnell · Akshay Krishnamurthy · Wen Sun
- 2022 Poster: A Simple and Optimal Policy Design for Online Learning with Safety against Heavy-tailed Risk
  David Simchi-Levi · Zeyu Zheng · Feng Zhu
- 2022 Poster: On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
  Jinglin Chen · Aditya Modi · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal
- 2022 Poster: Learning Mixed Multinomial Logits with Provable Guarantees
  Yiqun Hu · David Simchi-Levi · Zhenzhen Yan
- 2022 Poster: Context-Based Dynamic Pricing with Partially Linear Demand Model
  Jinzhi Bu · David Simchi-Levi · Chonghuan Wang
- 2021: Contributed Talk 3: Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
  Yunzong Xu · Akshay Krishnamurthy · David Simchi-Levi
- 2021 Poster: Gone Fishing: Neural Active Learning with Fisher Embeddings
  Jordan Ash · Surbhi Goel · Akshay Krishnamurthy · Sham Kakade
- 2021 Oral: Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
  Dylan Foster · Akshay Krishnamurthy
- 2021 Poster: Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
  Dylan Foster · Akshay Krishnamurthy
- 2021 Poster: Bayesian decision-making under misspecified priors with applications to meta-learning
  Max Simchowitz · Christopher Tosh · Akshay Krishnamurthy · Daniel Hsu · Thodoris Lykouris · Miro Dudik · Robert Schapire
- 2020 Poster: Provably adaptive reinforcement learning in metric spaces
  Tongyi Cao · Akshay Krishnamurthy
- 2020 Poster: Efficient Contextual Bandits with Continuous Actions
  Maryam Majzoubi · Chicheng Zhang · Rajan Chari · Akshay Krishnamurthy · John Langford · Aleksandrs Slivkins
- 2020 Poster: FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs
  Alekh Agarwal · Sham Kakade · Akshay Krishnamurthy · Wen Sun
- 2020 Poster: Learning the Linear Quadratic Regulator from Nonlinear Observations
  Zakaria Mhammedi · Dylan Foster · Max Simchowitz · Dipendra Misra · Wen Sun · Akshay Krishnamurthy · Alexander Rakhlin · John Langford
- 2020 Poster: Sample-Efficient Reinforcement Learning of Undercomplete POMDPs
  Chi Jin · Sham Kakade · Akshay Krishnamurthy · Qinghua Liu
- 2020 Spotlight: Sample-Efficient Reinforcement Learning of Undercomplete POMDPs
  Chi Jin · Sham Kakade · Akshay Krishnamurthy · Qinghua Liu
- 2020 Oral: FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs
  Alekh Agarwal · Sham Kakade · Akshay Krishnamurthy · Wen Sun
- 2020 Poster: Information Theoretic Regret Bounds for Online Nonlinear Control
  Sham Kakade · Akshay Krishnamurthy · Kendall Lowrey · Motoya Ohnishi · Wen Sun
- 2019: Coffee break, posters, and 1-on-1 discussions
  Yangyi Lu · Daniel Chen · Hongseok Namkoong · Marie Charpignon · Maja Rudolph · Amanda Coston · Julius von Kügelgen · Niranjani Prasad · Paramveer Dhillon · Yunzong Xu · Yixin Wang · Alexander Markham · David Rohde · Rahul Singh · Zichen Zhang · Negar Hassanpour · Ankit Sharma · Ciarán Lee · Jean Pouget-Abadie · Jesse Krijthe · Divyat Mahajan · Nan Rosemary Ke · Peter Wirnsberger · Vira Semenova · Dmytro Mykhaylov · Dennis Shen · Kenta Takatsu · Liyang Sun · Jeremy Yang · Alexander Franks · Pak Kan Wong · Tauhid Zaman · Shira Mitchell · min kyoung kang · Qi Yang
- 2019 Poster: Sample Complexity of Learning Mixture of Sparse Linear Regressions
  Akshay Krishnamurthy · Arya Mazumdar · Andrew McGregor · Soumyabrata Pal
- 2019 Poster: Model Selection for Contextual Bandits
  Dylan Foster · Akshay Krishnamurthy · Haipeng Luo
- 2019 Spotlight: Model Selection for Contextual Bandits
  Dylan Foster · Akshay Krishnamurthy · Haipeng Luo
- 2019 Poster: Phase Transitions and Cyclic Phenomena in Bandits with Switching Constraints
  David Simchi-Levi · Yunzong Xu
- 2018 Poster: Contextual bandits with surrogate losses: Margin bounds and efficient algorithms
  Dylan Foster · Akshay Krishnamurthy
- 2018 Poster: On Oracle-Efficient PAC RL with Rich Observations
  Christoph Dann · Nan Jiang · Akshay Krishnamurthy · Alekh Agarwal · John Langford · Robert Schapire
- 2018 Spotlight: On Oracle-Efficient PAC RL with Rich Observations
  Christoph Dann · Nan Jiang · Akshay Krishnamurthy · Alekh Agarwal · John Langford · Robert Schapire
- 2018 Poster: The Lingering of Gradients: How to Reuse Gradients Over Time
  Zeyuan Allen-Zhu · David Simchi-Levi · Xinshang Wang
- 2017 Poster: Off-policy evaluation for slate recommendation
  Adith Swaminathan · Akshay Krishnamurthy · Alekh Agarwal · Miro Dudik · John Langford · Damien Jose · Imed Zitouni
- 2017 Oral: Off-policy evaluation for slate recommendation
  Adith Swaminathan · Akshay Krishnamurthy · Alekh Agarwal · Miro Dudik · John Langford · Damien Jose · Imed Zitouni