

Poster

ρ-POMDPs have Lipschitz-Continuous ε-Optimal Value Functions

Mathieu Fehr · Olivier Buffet · Vincent Thomas · Jilles Dibangoye

Room 517 AB #165

Keywords: [ Reinforcement Learning ] [ Markov Decision Processes ] [ Planning ]


Abstract:

Many state-of-the-art algorithms for solving Partially Observable Markov Decision Processes (POMDPs) rely on turning the problem into a “fully observable” problem (a belief MDP) and exploiting the piecewise linearity and convexity (PWLC) of the optimal value function in this new state space (the belief simplex ∆). This approach has been extended to solving ρ-POMDPs, i.e., POMDPs with information-oriented criteria, when the reward ρ is convex in ∆. General ρ-POMDPs can also be turned into “fully observable” problems, but with no means of exploiting the PWLC property. In this paper, we focus on POMDPs and ρ-POMDPs with λ_ρ-Lipschitz reward functions, and demonstrate that, for finite horizons, the optimal value function is Lipschitz-continuous. We then propose value function approximators that both upper- and lower-bound the optimal value function and are shown to provide uniformly improvable bounds. This allows us to derive two algorithms from HSVI, which we evaluate empirically on various benchmark problems.
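To give intuition for Lipschitz-based bounds, the sketch below shows one natural construction given Lipschitz continuity: from a set of sampled beliefs with value estimates, the optimal value function is lower-bounded by a maximum of downward cones and upper-bounded by a minimum of upward cones under the 1-norm. This is a minimal illustration, not the authors' implementation; the function names, the Lipschitz constant lam, and the toy numbers are all hypothetical.

import numpy as np

# Lipschitz cone bounds on the belief simplex (illustrative sketch,
# assuming 1-norm cones; not the paper's exact approximators).

def lipschitz_lower_bound(b, points, values, lam):
    """Max over downward cones anchored at sampled beliefs:
    V_lower(b) = max_i ( v_i - lam * ||b - b_i||_1 )."""
    dists = np.abs(points - b).sum(axis=1)  # 1-norm distances to b
    return np.max(values - lam * dists)

def lipschitz_upper_bound(b, points, values, lam):
    """Min over upward cones anchored at sampled beliefs:
    V_upper(b) = min_i ( v_i + lam * ||b - b_i||_1 )."""
    dists = np.abs(points - b).sum(axis=1)
    return np.min(values + lam * dists)

# Usage: three beliefs over a 2-state POMDP with hypothetical values
# (e.g., an entropy-like information reward peaking at the uniform belief).
points = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
values = np.array([0.0, 1.0, 0.0])
lam = 2.0                     # hypothetical Lipschitz constant lambda_rho
b = np.array([0.25, 0.75])
print(lipschitz_lower_bound(b, points, values, lam))  # 0.0
print(lipschitz_upper_bound(b, points, values, lam))  # 1.0

Adding a new sampled belief can only raise the lower bound and lower the upper bound, which is the sense in which such bounds are uniformly improvable.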
