
BooVI: Provably Efficient Bootstrapped Value Iteration
Boyi Liu · Qi Cai · Zhuoran Yang · Zhaoran Wang

Thu Dec 09 12:30 AM -- 02:00 AM (PST)
Despite the tremendous success of reinforcement learning (RL) with function approximation, efficient exploration remains a significant challenge, both practically and theoretically. In particular, existing theoretically grounded RL algorithms based on upper confidence bounds (UCBs), such as optimistic least-squares value iteration (LSVI), are often incompatible with practically powerful function approximators, such as neural networks. In this paper, we develop a variant of \underline{boo}tstrapped LS\underline{VI}, namely BooVI, which bridges such a gap between practice and theory. Practically, BooVI drives exploration through (re)sampling, making it compatible with general function approximators. Theoretically, BooVI inherits the worst-case $\tilde{O}(\sqrt{d^3 H^3 T})$-regret of optimistic LSVI in the episodic linear setting. Here $d$ is the feature dimension, $H$ is the episode horizon, and $T$ is the total number of steps.
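To make the abstract's key idea concrete, the sketch below shows exploration driven by (re)sampling in a least-squares value-iteration loop: each episode, the regression targets (and the ridge prior) are randomly perturbed, and the agent acts greedily with respect to the resulting perturbed Q-estimates. This is a minimal illustration of the general bootstrapped/randomized-LSVI idea on a hypothetical toy chain MDP, not the authors' exact BooVI algorithm; the environment, feature map, and noise scale `sigma` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy episodic chain MDP (illustrative, not from the paper):
# S states, 2 actions, horizon H; one-hot features give d = S * A.
S, A, H = 5, 2, 5
d = S * A

def phi(s, a):
    """One-hot feature map for the (state, action) pair."""
    v = np.zeros(d)
    v[s * A + a] = 1.0
    return v

def step(s, a):
    """Action 1 moves right; action 0 resets. Reward at the far end."""
    s2 = min(s + 1, S - 1) if a == 1 else 0
    r = 1.0 if (a == 1 and s2 == S - 1) else 0.0
    return s2, r

def boot_lsvi_episode(data, lam=1.0, sigma=0.5):
    """One round of bootstrapped LSVI: fit perturbed Q-estimates backward
    over h, then roll out the greedy policy.  The Gaussian perturbations on
    the targets and the prior stand in for resampling-based exploration."""
    thetas = []
    V_next = lambda s: 0.0  # value beyond the horizon is zero
    for h in reversed(range(H)):
        X = (np.array([phi(s, a) for (s, a, r, s2) in data[h]])
             if data[h] else np.zeros((0, d)))
        # Perturbed regression targets: r + V_{h+1}(s') + noise.
        y = np.array([r + V_next(s2) + sigma * rng.standard_normal()
                      for (s, a, r, s2) in data[h]])
        # Perturbed ridge prior keeps exploration alive with little data.
        theta0 = sigma * rng.standard_normal(d)
        Lam = X.T @ X + lam * np.eye(d)
        theta = np.linalg.solve(Lam, X.T @ y + lam * theta0)
        thetas.append(theta)
        V_next = lambda s, th=theta: max(th @ phi(s, a) for a in range(A))
    thetas.reverse()
    # Roll out greedily w.r.t. the perturbed Q-estimates, logging data.
    s, ret = 0, 0.0
    for h in range(H):
        a = int(np.argmax([thetas[h] @ phi(s, a2) for a2 in range(A)]))
        s2, r = step(s, a)
        data[h].append((s, a, r, s2))
        s, ret = s2, ret + r
    return ret

data = [[] for _ in range(H)]
returns = [boot_lsvi_episode(data) for _ in range(200)]
print(f"mean return over last 50 episodes: {np.mean(returns[-50:]):.2f}")
```

Because exploration comes purely from resampling the regression noise rather than from an explicit UCB bonus, the same loop works unchanged if the linear regression is replaced by any function approximator (e.g. a neural network fit to the perturbed targets), which is the compatibility gap the abstract highlights.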

Author Information

Boyi Liu (Northwestern University)
Qi Cai (Northwestern University)
Zhuoran Yang (Princeton)
Zhaoran Wang (Princeton University)
