Spotlight
Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
Fei Feng · Ruosong Wang · Wotao Yin · Simon Du · Lin Yang

Mon Dec 07 07:50 PM -- 08:00 PM (PST) @ Orals & Spotlights: Reinforcement Learning

Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems [Tang et al., 2017; Bellemare et al., 2016], we investigate when this paradigm is provably efficient. We study episodic Markov decision processes with rich observations generated from a small number of latent states. We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm. Theoretically, we prove that as long as the unsupervised learning algorithm enjoys a polynomial sample complexity guarantee, we can find a near-optimal policy with sample complexity polynomial in the number of latent states, which is significantly smaller than the number of observations. Empirically, we instantiate our framework on a class of hard exploration problems to demonstrate the practicality of our theory.
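
To make the two-component framework concrete, here is a minimal Python sketch, not the authors' algorithm or experiments. Everything in it is a hypothetical stand-in: ToyBlockMDP is a toy rich-observation MDP, k-means plays the role of the unsupervised learning component (decoding observations to latent-state ids), and a simplified optimistic Q-learning loop in the UCB-Hoeffding style plays the role of the no-regret tabular RL component.

# Illustrative sketch of the two-component framework from the abstract.
# Hypothetical stand-ins throughout; not the paper's actual method.
import numpy as np
from sklearn.cluster import KMeans

class ToyBlockMDP:
    """Latent chain of n_states; each observation is the current latent
    state's random embedding plus Gaussian noise (a 'rich observation')."""
    def __init__(self, n_states=5, obs_dim=32, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_states = n_states
        self.emit = self.rng.normal(size=(n_states, obs_dim))
        self.s = 0

    def _obs(self):
        return self.emit[self.s] + 0.1 * self.rng.normal(size=self.emit.shape[1])

    def reset(self):
        self.s = 0
        return self._obs()

    def step(self, a):  # action 1 moves right, 0 moves left; reward at the end
        self.s = min(self.s + 1, self.n_states - 1) if a == 1 else max(self.s - 1, 0)
        return self._obs(), float(self.s == self.n_states - 1)

env = ToyBlockMDP()

# Component 1 (unsupervised learning): decode observations to latent-state ids.
# Cluster labels are an arbitrary permutation of the true latent states, which
# is harmless: tabular RL is invariant to relabeling the state space.
batch = []
obs = env.reset()
for _ in range(2000):
    batch.append(obs)
    obs, _ = env.step(int(env.rng.integers(2)))
decode = KMeans(n_clusters=env.n_states, n_init=10).fit(np.array(batch)).predict

# Component 2 (no-regret tabular RL): optimistic Q-learning on decoded states,
# with learning rate (H+1)/(H+count) and a count-based exploration bonus.
H, n_actions = 20, 2
Q = np.full((env.n_states, n_actions), float(H))   # optimistic initialization
counts = np.zeros((env.n_states, n_actions))
for _ in range(300):                               # episodes
    obs = env.reset()
    for _ in range(H):
        s = int(decode(obs[None, :])[0])
        a = int(np.argmax(Q[s]))
        next_obs, r = env.step(a)
        s2 = int(decode(next_obs[None, :])[0])
        counts[s, a] += 1
        lr = (H + 1) / (H + counts[s, a])
        target = r + 1.0 / np.sqrt(counts[s, a]) + np.max(Q[s2])
        Q[s, a] = min(float(H), (1 - lr) * Q[s, a] + lr * target)  # clip to H
        obs = next_obs
print("Greedy action per decoded state:", np.argmax(Q, axis=1))

The design point this sketch illustrates is the reason the decomposition works: the tabular RL component never sees raw observations, so its sample complexity scales with the number of latent states (here 5) rather than the observation space, and any decoder that is accurate up to relabeling suffices.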

Author Information

Fei Feng (University of California, Los Angeles)
Ruosong Wang (Carnegie Mellon University)
Wotao Yin (Alibaba US, DAMO Academy)
Simon Du (Institute for Advanced Study)
Lin Yang (UCLA)
