

Poster

Learning in Observable POMDPs, without Computationally Intractable Oracles

Noah Golowich · Ankur Moitra · Dhruv Rohatgi

Hall J (level 1) #330

Keywords: [ barycentric spanner ] [ policy cover ] [ Partially-observable Markov Decision Processes ]


Abstract:

Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard optimistic planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in "observable" POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, and instead give a novel application of barycentric spanners to constructing policy covers.
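For concreteness, the observability condition described in words above is commonly stated as a quantitative separation requirement. The following is a hedged sketch of that standard formalization; the symbols $\mathbb{O}$, $S$, $O$, and $\gamma$ are not fixed by the abstract and are chosen here for illustration. Writing $\mathbb{O} \in \mathbb{R}^{O \times S}$ for the matrix whose columns are the observation distributions of the individual states, a POMDP is said to be $\gamma$-observable if

$$\|\mathbb{O} b - \mathbb{O} b'\|_1 \;\ge\; \gamma \, \|b - b'\|_1 \qquad \text{for all distributions } b, b' \text{ over states},$$

i.e. any two belief states that are well separated in total variation distance induce observation distributions that remain separated, up to the factor $\gamma$.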
