Poster
On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
Jinglin Chen · Aditya Modi · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #315

We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions previously made for the latter two settings are not statistically necessary for reward-free exploration. On the negative side, we provide a statistical hardness result for both reward-free and reward-aware exploration under linear completeness assumptions when the underlying features are unknown, showing an exponential separation between the low-rank and linear completeness settings.
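
As a concrete illustration of the reward-free setting described above, the sketch below runs the two-phase protocol on a small tabular MDP: an exploration phase that observes no rewards, followed by a planning phase for a reward function revealed only afterward. This is a minimal sketch under stated assumptions, not the authors' RFOLIVE algorithm: the uniformly random exploration policy is a placeholder for RFOLIVE's adaptive exploration, and all names and sizes (`step`, `plan`, S, A, H) are illustrative.

```python
import numpy as np

# Minimal sketch of the two-phase reward-free RL protocol on a tabular MDP.
# Illustrative only: random exploration stands in for RFOLIVE's strategy.

S, A, H = 5, 2, 4                                # states, actions, horizon
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))       # true kernel: P[s, a] is a dist over s'

def step(s, a):
    """Sample a next state from the true (unknown to the agent) dynamics."""
    return rng.choice(S, p=P[s, a])

# ---- Phase 1: reward-free exploration (no reward signal is observed) ----
counts = np.zeros((S, A, S))
for _ in range(5000):                            # exploration episodes
    s = 0
    for _ in range(H):
        a = rng.integers(A)                      # placeholder for an adaptive exploration policy
        s2 = step(s, a)
        counts[s, a, s2] += 1
        s = s2

# Empirical transition model built purely from exploration data
n = counts.sum(axis=2, keepdims=True)
P_hat = np.where(n > 0, counts / np.maximum(n, 1), 1.0 / S)

# ---- Phase 2: planning for an arbitrary reward revealed only now ----
def plan(reward):
    """Finite-horizon value iteration on the empirical model; reward has shape (S, A)."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = reward + P_hat @ V                   # Q[s, a] = r(s, a) + E_{s' ~ P_hat}[V(s')]
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V[0]                              # greedy policy and its estimated value at s = 0

pi, v0 = plan(rng.random((S, A)))
print("estimated value of the planned policy:", v0)
```

The point of the protocol is that Phase 1 runs once, and Phase 2 can then be repeated for any number of downstream reward functions without further interaction with the environment.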

Author Information

Jinglin Chen (University of Illinois Urbana-Champaign)
Aditya Modi (Microsoft)
Akshay Krishnamurthy (Microsoft)
Nan Jiang (University of Illinois Urbana-Champaign)
Alekh Agarwal (Google Research)
