Poster
Safe Policy Improvement by Minimizing Robust Baseline Regret
Mohammad Ghavamzadeh · Marek Petrik · Yinlam Chow

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #81

An important problem in sequential decision-making under uncertainty is to use limited data to compute a safe policy, i.e., a policy that is guaranteed to perform at least as well as a given baseline strategy. In this paper, we develop and analyze a new model-based approach for computing a safe policy when we have access to an inaccurate dynamics model of the system with known accuracy guarantees. Our proposed robust method uses this (inaccurate) model to directly minimize the (negative) regret w.r.t. the baseline policy. In contrast to existing approaches, minimizing the regret allows one to improve upon the baseline policy in states with accurate dynamics and to seamlessly fall back to the baseline policy otherwise. We show that our formulation is NP-hard and propose an approximate algorithm. Our empirical results on several domains show that even this relatively simple approximate algorithm can significantly outperform standard approaches.
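
As a rough sketch of the objective described in the abstract (the notation below is assumed for illustration and is not taken from this page): let ρ(π, ξ) denote the return of policy π under transition model ξ, let π_B be the baseline policy, and let Ξ be the set of models consistent with the known accuracy guarantees. Minimizing the robust baseline regret then amounts to solving

\[
\pi^\star \in \arg\max_{\pi} \; \min_{\xi \in \Xi} \Big( \rho(\pi, \xi) - \rho(\pi_B, \xi) \Big).
\]

Because π_B itself is a feasible choice with objective value zero, the optimal worst-case improvement is non-negative, so the returned policy performs at least as well as the baseline under every model in Ξ; the joint maximization over policies and minimization over models is what makes the problem computationally hard in general.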

Author Information

Mohammad Ghavamzadeh (Facebook AI Research)
Marek Petrik (University of New Hampshire)
Yinlam Chow (Stanford University)