

Poster

Sampling Networks and Aggregate Simulation for Online POMDP Planning

Hao (Jackson) Cui · Roni Khardon

East Exhibition Hall B + C #205

Keywords: [ Markov Decision Processes ] [ Probabilistic Methods ] [ Belief Propagation ] [ Reinforcement Learning and Planning ]


Abstract:

The paper introduces a new algorithm for planning in partially observable Markov decision processes (POMDPs) based on the idea of aggregate simulation. The algorithm uses product distributions to approximate the belief state and builds a representation graph of an approximate action-value function over belief space. The graph captures the result of simulating the model in aggregate under independence assumptions, giving a symbolic representation of the value function. The algorithm supports large observation spaces using sampling networks, a representation of the process of sampling values of observations, which is integrated into the graph representation. Following previous work on MDPs, this approach enables action selection in POMDPs through gradient optimization over the graph representation. It complements recent POMDP algorithms that are based on particle representations of belief states and an explicit search for action selection, and it scales to large factored action spaces in addition to large state and observation spaces. An experimental evaluation demonstrates that the algorithm provides excellent performance relative to the state of the art on large POMDP problems.
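
To make two of the abstract's ideas concrete, here is a minimal JAX sketch: a product-distribution belief over binary state variables is propagated "in aggregate" through a differentiable model, and an action is selected by gradient ascent over a relaxed action vector. This is not the authors' code: the factored dynamics, reward, horizon, and open-loop simplification are all illustrative assumptions, and the sketch omits the paper's sampling networks for observations.

```python
# Hedged sketch of aggregate simulation with a product belief and
# gradient-based action selection. All model components are toy stand-ins.

import jax
import jax.numpy as jnp

HORIZON = 5

def step_marginals(belief, action):
    """Propagate state-variable marginals one step under independence
    assumptions: each output marginal is a differentiable function of
    the current marginals and the relaxed action vector (toy dynamics)."""
    return 0.8 * belief + 0.2 * action

def reward(belief, action):
    """Expected reward under the product belief (toy stand-in)."""
    return jnp.sum(belief) - 0.1 * jnp.sum(action)

def q_value(action_logits, belief):
    """Approximate Q-value: simulate the model in aggregate for a fixed
    horizon, repeating the first action (an open-loop simplification)."""
    action = jax.nn.sigmoid(action_logits)  # relax discrete actions to [0, 1]
    total = 0.0
    for _ in range(HORIZON):
        total += reward(belief, action)
        belief = step_marginals(belief, action)
    return total

def select_action(belief, steps=100, lr=0.5):
    """Gradient ascent on the action relaxation over the simulation graph."""
    logits = jnp.zeros_like(belief)
    grad_fn = jax.grad(q_value)  # differentiate Q w.r.t. action logits
    for _ in range(steps):
        logits = logits + lr * grad_fn(logits, belief)
    # Round the relaxed action back to a discrete choice.
    return (jax.nn.sigmoid(logits) > 0.5).astype(jnp.int32)

belief0 = jnp.full(4, 0.5)  # uniform product belief over 4 binary variables
print(select_action(belief0))
```

Because the simulation graph is differentiable end to end, the same gradient machinery scales to large factored action spaces, where enumerating actions for explicit search would be infeasible.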
