Keywords: [ Reinforcement Learning and Planning ] [ Theory ] [ Generative Model ]

Oral presentation:
Oral Session 2: Reinforcement Learning

Tue 7 Dec 1 a.m. PST — 2 a.m. PST

[OpenReview]

Thu 9 Dec 4:30 p.m. PST — 6 p.m. PST

Abstract:
A fundamental question in the theory of reinforcement learning is: supposing the optimal $Q$-function lies in the linear span of a given $d$-dimensional feature mapping, is sample-efficient reinforcement learning (RL) possible? The recent and remarkable result of Weisz et al. (2020) resolves this question in the negative, providing an exponential (in $d$) sample size lower bound, which holds even if the agent has access to a generative model of the environment. One may hope that such a lower bound can be circumvented with the even stronger assumption that there is a \emph{constant gap} between the optimal $Q$-value of the best action and that of the second-best action (for all states); indeed, the construction in Weisz et al. (2020) relies on having an exponentially small gap. This work resolves this subsequent question, showing that an exponential sample complexity lower bound still holds even if a constant gap is assumed. Perhaps surprisingly, this result implies an exponential separation between the online RL setting and the generative model setting: with a constant gap, sample-efficient RL is in fact possible in the latter setting. Complementing our negative hardness result, we give two positive results showing that provably sample-efficient RL is possible either under an additional low-variance assumption or under a novel hypercontractivity assumption.
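For concreteness, the two assumptions discussed in the abstract can be written as follows (a sketch in standard notation; the symbols $\phi$, $\theta^\star$, and $\Delta$ are our choices, not necessarily those used in the paper):

```latex
% Linear realizability: the optimal Q-function lies in the span of a
% known d-dimensional feature mapping phi, i.e. for some theta* in R^d,
\[
Q^\star(s, a) = \langle \theta^\star, \phi(s, a) \rangle
\quad \text{for all } (s, a).
\]

% Constant suboptimality gap: in every state, the best action's optimal
% Q-value exceeds that of the second-best action by at least Delta > 0,
% a constant independent of d.
\[
Q^\star\bigl(s, \pi^\star(s)\bigr) \;-\; \max_{a \neq \pi^\star(s)} Q^\star(s, a)
\;\ge\; \Delta
\quad \text{for all } s.
\]
```

The hardness result says that even under both conditions simultaneously, any online RL algorithm needs a number of samples exponential in $\min(d, H)$, whereas with a generative model the constant-gap condition suffices for sample efficiency.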
