Poster

Policy-Conditioned Uncertainty Sets for Robust Markov Decision Processes

Andrea Tirinzoni · Marek Petrik · Xiangli Chen · Brian Ziebart

Room 517 AB #168

Keywords: [ Decision and Control ]


Abstract:

What policy should be employed in a Markov decision process with uncertain parameters? The robust optimization answer to this question is to use rectangular uncertainty sets, which independently reflect available knowledge about each state, and then to obtain a decision policy that maximizes expected reward for the worst-case decision process parameters from these uncertainty sets. While rectangularity is computationally convenient and leads to tractable solutions, it often produces policies that are too conservative in practice, and it does not facilitate knowledge transfer between portions of the state space or across related decision processes. In this work, we propose non-rectangular uncertainty sets that bound marginal moments of state-action features defined over entire trajectories through a decision process. This enables generalization to different portions of the state space while retaining appropriate uncertainty about the decision process. We develop algorithms for solving the resulting robust decision problems, which reduce to finding an optimal policy for a mixture of decision processes, and we demonstrate the benefits of our approach experimentally.
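For readers unfamiliar with the rectangular baseline the abstract contrasts against, the sketch below shows (s,a)-rectangular robust value iteration on a toy MDP. All numbers are made up for illustration, and the uncertainty set is modeled as a small finite list of candidate transition kernels (not the paper's moment-based sets); the key property is that the adversary picks the worst transition distribution independently for each state-action pair, which is exactly the rectangularity assumption.

```python
import numpy as np

# Toy MDP: 2 states, 2 actions, discount 0.9 (illustrative values).
n_states, n_actions, gamma = 2, 2, 0.9
rewards = np.array([[1.0, 0.0],
                    [0.0, 1.0]])  # rewards[s, a]

# Rectangular uncertainty set: for each (s, a), two candidate transition
# distributions over next states. P[k][s, a] is a probability vector.
P = [
    np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.3, 0.7], [0.1, 0.9]]]),
    np.array([[[0.6, 0.4], [0.8, 0.2]],
              [[0.5, 0.5], [0.4, 0.6]]]),
]

V = np.zeros(n_states)
for _ in range(500):
    Q = np.empty((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            # Rectangularity: the worst-case model is chosen per (s, a),
            # independently of choices made at other state-action pairs.
            worst = min(Pk[s, a] @ V for Pk in P)
            Q[s, a] = rewards[s, a] + gamma * worst
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

robust_policy = Q.argmax(axis=1)
print(V, robust_policy)
```

Because each (s,a) worst case is chosen independently, the adversary is effectively more powerful than any single consistent model, which is the source of the conservativeness the paper's non-rectangular, trajectory-level moment sets aim to avoid.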
