

Poster

Safe Policy Improvement by Minimizing Robust Baseline Regret

Mohammad Ghavamzadeh · Marek Petrik · Yinlam Chow

Area 5+6+7+8 #81

Keywords: [ Reinforcement Learning Algorithms ]


Abstract:

An important problem in sequential decision-making under uncertainty is to use limited data to compute a safe policy, i.e., a policy that is guaranteed to perform at least as well as a given baseline strategy. In this paper, we develop and analyze a new model-based approach to computing a safe policy when we have access to an inaccurate dynamics model of the system with known accuracy guarantees. Our proposed robust method uses this (inaccurate) model to directly minimize the (negative) regret w.r.t. the baseline policy. In contrast to existing approaches, minimizing the regret allows one to improve the baseline policy in states with accurate dynamics and to seamlessly fall back to the baseline policy otherwise. We show that our formulation is NP-hard and propose an approximate algorithm. Our empirical results on several domains show that even this relatively simple approximate algorithm can significantly outperform standard approaches.
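To make the objective concrete, here is a minimal sketch of what "minimizing the robust baseline regret" can look like, written in our own notation rather than the paper's: let $\rho(\pi, \xi)$ denote the return of policy $\pi$ under a transition model $\xi$, let $\pi_B$ be the baseline policy, and let $\Xi$ be the set of models consistent with the known accuracy guarantees around the inaccurate model. The robust solution then maximizes the worst-case improvement over the baseline,

$$\pi_R \in \arg\max_{\pi} \; \min_{\xi \in \Xi} \Big( \rho(\pi, \xi) - \rho(\pi_B, \xi) \Big).$$

Under this formulation, where the model is accurate (the uncertainty set $\Xi$ is tight) the inner minimization permits genuine improvement over $\pi_B$, while where the model is too uncertain the worst case drives the choice back toward the baseline, keeping the regret near zero.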
