

Poster in Workshop: Safe and Robust Control of Uncertain Systems

Behavior Policy Search for Risk Estimators in Reinforcement Learning

Elita Lobo · Marek Petrik · Dharmashankar Subramanian


Abstract:

In real-world sequential decision problems, exploration is expensive, and the risk of expert decision policies must be evaluated from limited data. In this setting, Monte Carlo (MC) risk estimators are typically used to estimate the risks associated with decision policies. While these estimators have the desirable low-bias property, they often suffer from large variance. In this paper, we consider the problem of minimizing the asymptotic mean squared error, and hence the variance, of MC risk estimators. We show that by carefully choosing the data-sampling policy (the behavior policy), we can obtain low-variance estimates of the risk of any given decision policy.
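To make the role of the behavior policy concrete, the sketch below (not the authors' method) shows a standard per-trajectory importance-sampled MC estimator, here estimating expected return as a simple stand-in for a general risk functional. The MDP, the policies, and names such as `sample_trajectory` and `is_estimate` are illustrative assumptions; the point is that the variance of the estimate depends on the behavior policy used to collect data, which is the quantity the paper proposes to optimize.

```python
# Minimal sketch (illustrative only): importance-sampled Monte Carlo
# estimation of a target policy's expected return from trajectories
# collected under a separate behavior policy.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 5, 2, 10

# Hypothetical tabular policies: rows are states, columns are action probabilities.
target_policy = rng.dirichlet(np.ones(n_actions), size=n_states)
behavior_policy = rng.dirichlet(np.ones(n_actions), size=n_states)

# Random MDP dynamics and rewards, for illustration only.
transitions = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
rewards = rng.uniform(size=(n_states, n_actions))

def sample_trajectory(policy):
    """Roll out one episode under `policy`; return visited states, actions, and total return."""
    s, states, actions, ret = 0, [], [], 0.0
    for _ in range(horizon):
        a = rng.choice(n_actions, p=policy[s])
        states.append(s)
        actions.append(a)
        ret += rewards[s, a]
        s = rng.choice(n_states, p=transitions[s, a])
    return states, actions, ret

def is_estimate(n_episodes=1000):
    """Importance-sampled MC estimate of the target policy's expected return."""
    weighted_returns = []
    for _ in range(n_episodes):
        states, actions, ret = sample_trajectory(behavior_policy)
        # Per-trajectory importance weight: product of target/behavior probability ratios.
        w = np.prod([target_policy[s, a] / behavior_policy[s, a]
                     for s, a in zip(states, actions)])
        weighted_returns.append(w * ret)
    return np.mean(weighted_returns), np.var(weighted_returns)

mean, var = is_estimate()
print(f"IS estimate of expected return: {mean:.3f} (sample variance {var:.3f})")
```

In this sketch the estimator is unbiased for any behavior policy with sufficient support, but its sample variance changes with the choice of behavior policy; searching for a behavior policy that drives this variance down is the idea the abstract refers to.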