Poster
Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling
Tengyang Xie · Yifei Ma · Yu-Xiang Wang
East Exhibition Hall B, C #208
Keywords: [ Learning Theory ] [ Theory ] [ Reinforcement Learning ] [ Reinforcement Learning and Planning ]
Abstract:
Motivated by the many real-world applications of reinforcement learning (RL) that require safe-policy iterations, we consider the problem of off-policy evaluation (OPE) --- the problem of evaluating a new policy using the historical data obtained by different behavior policies --- under the model of nonstationary episodic Markov Decision Processes (MDP) with a long horizon and a large action space. Existing importance sampling (IS) methods often suffer from large variance that depends exponentially on the RL horizon $H$. To solve this problem, we consider a marginalized importance sampling (MIS) estimator that recursively estimates the state marginal distribution for the target policy at every step.
MIS achieves a mean-squared error of
$$\frac{1}{n} \sum\nolimits_{t=1}^H \mathbb{E}_{\mu}\left[\frac{d_t^\pi(s_t)^2}{d_t^\mu(s_t)^2} \mathrm{Var}_{\mu}\left[\frac{\pi_t(a_t|s_t)}{\mu_t(a_t|s_t)}\big( V_{t+1}^\pi(s_{t+1}) + r_t\big) \middle| s_t\right]\right] + \tilde{O}(n^{-1.5}),$$
where $\mu$ and $\pi$ are the logging and target policies, $d_t^\mu(s_t)$ and $d_t^\pi(s_t)$ are the marginal distributions of the state at the $t$-th step, $H$ is the horizon, $n$ is the sample size, and $V_{t+1}^\pi$ is the value function of the MDP under $\pi$. The result matches the Cramer-Rao lower bound in [Jiang and Li, 2016] up to a multiplicative factor of $H$. To the best of our knowledge, this is the first OPE estimation error bound with a polynomial dependence on $H$. Besides theory, we show empirical superiority of our method in time-varying, partially observable, and long-horizon RL environments.
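For concreteness, the recursion behind the MIS estimator can be sketched in a few lines of Python for the tabular case. The data layout (arrays of logged trajectories), the policy tables `pi` and `mu`, and the function name `mis_estimate` below are illustrative assumptions rather than the authors' reference implementation; the sketch only mirrors the recursion described above: propagate an estimate of $d_t^\pi$ step by step using per-step action importance ratios, then combine it with an importance-weighted reward estimate.

```python
# Minimal tabular sketch of marginalized importance sampling (MIS) OPE.
# Assumptions (not from the paper's code): states/actions are integers,
# trajectories are stored as (n, H) arrays, and pi/mu are given as tables.
import numpy as np

def mis_estimate(states, actions, rewards, next_states, pi, mu):
    """Estimate the value of target policy pi from trajectories logged under mu.

    states, actions, next_states: int arrays of shape (n, H)
        (next_states[:, H-1] may point to a dummy terminal state).
    rewards: float array of shape (n, H).
    pi, mu: arrays of shape (H, S, A) with pi[t, s, a] = pi_t(a | s), etc.
    """
    n, H = states.shape
    S = pi.shape[1]

    # Initial state marginal is policy-independent: use the empirical distribution.
    d_pi = np.bincount(states[:, 0], minlength=S) / n

    value = 0.0
    for t in range(H):
        s_t, a_t = states[:, t], actions[:, t]
        r_t, s_next = rewards[:, t], next_states[:, t]

        # Per-step action importance ratio pi_t(a|s) / mu_t(a|s).
        rho = pi[t, s_t, a_t] / mu[t, s_t, a_t]

        # Importance-weighted estimates, per state visited at step t, of the
        # mean reward and the next-state transition kernel under pi.
        counts = np.bincount(s_t, minlength=S).astype(float)
        r_hat = np.bincount(s_t, weights=rho * r_t, minlength=S)
        P_hat = np.zeros((S, S))
        np.add.at(P_hat, (s_t, s_next), rho)
        safe = np.maximum(counts, 1.0)   # avoid division by zero for unvisited states
        r_hat /= safe
        P_hat /= safe[:, None]

        # Accumulate the value, then push the state marginal one step forward:
        # d_{t+1}^pi(s') = sum_s d_t^pi(s) * P_hat_t^pi(s' | s).
        value += d_pi @ r_hat
        d_pi = d_pi @ P_hat

    return value
```

Note that the only importance weight in the sketch is the single-step ratio $\pi_t(a_t|s_t)/\mu_t(a_t|s_t)$; unlike trajectory-wise IS, no product of ratios over the horizon appears, which is the source of the polynomial (rather than exponential) dependence on $H$ described in the abstract.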