Poster
Sample-Efficient Reinforcement Learning of Undercomplete POMDPs
Chi Jin · Sham Kakade · Akshay Krishnamurthy · Qinghua Liu

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #553
Partial observability is a common challenge in many reinforcement learning applications, which requires an agent to maintain memory, infer latent states, and integrate this past information into exploration. This challenge leads to a number of computational and statistical hardness results for learning general Partially Observable Markov Decision Processes (POMDPs). This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of POMDPs. In particular, we present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works. OOM-UCB achieves an optimal sample complexity of $\tilde{\mathcal{O}}(1/\varepsilon^2)$ for finding an $\varepsilon$-optimal policy, along with being polynomial in all other relevant quantities. As an interesting special case, we also provide a computationally and statistically efficient algorithm for POMDPs with deterministic state transitions.
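To make the undercomplete condition concrete, here is a minimal illustrative sketch (not the paper's notation or the OOM-UCB algorithm itself) of a finite POMDP container with a check that the number of observations exceeds the number of latent states; all names and shapes are assumptions for illustration.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical minimal container for a finite POMDP; field names and
# shapes are illustrative, not taken from the paper.
@dataclass
class FinitePOMDP:
    T: np.ndarray  # transition probabilities, shape (actions, states, states)
    O: np.ndarray  # observation probabilities, shape (states, observations)
    R: np.ndarray  # rewards, shape (states, actions)

    def is_undercomplete(self) -> bool:
        # Undercomplete: strictly more observations than latent states,
        # the regime the abstract identifies as admitting sample-efficient
        # learning.
        num_states, num_obs = self.O.shape
        return num_obs > num_states

# Tiny example: 2 latent states, 3 observations, 1 action.
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
O = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
R = np.array([[0.0], [1.0]])
pomdp = FinitePOMDP(T=T, O=O, R=R)
print(pomdp.is_undercomplete())  # → True
```

Dropping one observation column (so observations equal states) would make the check return False, illustrating the boundary of the undercomplete regime.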

Author Information

Chi Jin (Princeton University)
Sham Kakade (University of Washington & Microsoft Research)
Akshay Krishnamurthy (Microsoft)
Qinghua Liu (Princeton University)
