

Poster

Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning

Nathan Kallus · Masatoshi Uehara

East Exhibition Hall B + C #209

Keywords: [ Causal Inference ] [ Probabilistic Methods ] [ Reinforcement Learning and Planning ] [ Reinforcement Learning ]


Abstract:

Off-policy evaluation (OPE) in both contextual bandits and reinforcement learning allows one to evaluate novel decision policies without needing to conduct exploration, which is often costly or otherwise infeasible. The problem's importance has attracted many proposed solutions, including importance sampling (IS), self-normalized IS (SNIS), and doubly robust (DR) estimates. DR and its variants ensure semiparametric local efficiency if Q-functions are well-specified, but if they are not, they can be worse than both IS and SNIS. DR also does not enjoy SNIS's inherent stability and boundedness. We propose new estimators for OPE based on empirical likelihood that are always more efficient than IS, SNIS, and DR and satisfy the same stability and boundedness properties as SNIS. Along the way, we categorize various properties and classify existing estimators according to them. Beyond the theoretical guarantees, empirical studies suggest that the new estimators provide practical advantages.
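For context on the baselines the abstract compares against, below is a minimal sketch of the standard IS, SNIS, and DR estimates in the contextual-bandit case with finitely many actions. This is not the empirical-likelihood estimator proposed in the paper, and all function and argument names (e.g., `ope_baselines`, `behavior_probs`, `q_hat`) are illustrative assumptions.

```python
# A minimal sketch (not the paper's estimator) of the three baseline
# off-policy estimators named in the abstract, for a contextual bandit
# with K actions. Names and signatures are illustrative assumptions.
import numpy as np

def ope_baselines(rewards, actions, behavior_probs, target_probs, q_hat):
    """Importance sampling (IS), self-normalized IS (SNIS), and doubly
    robust (DR) estimates of the target policy's value.

    rewards        : (n,)   observed rewards r_i
    actions        : (n,)   actions a_i taken by the behavior policy
    behavior_probs : (n, K) pi_b(a | x_i) for every action a
    target_probs   : (n, K) pi_e(a | x_i) for every action a
    q_hat          : (n, K) estimated Q-function Q_hat(x_i, a)
    """
    n = len(rewards)
    idx = np.arange(n)
    # Importance weights rho_i = pi_e(a_i | x_i) / pi_b(a_i | x_i)
    rho = target_probs[idx, actions] / behavior_probs[idx, actions]

    is_est = np.mean(rho * rewards)                  # plain IS
    snis_est = np.sum(rho * rewards) / np.sum(rho)   # self-normalized IS
    # DR: direct-method term plus an IS correction of the Q-residual
    dm = np.sum(target_probs * q_hat, axis=1)
    dr_est = np.mean(dm + rho * (rewards - q_hat[idx, actions]))
    return is_est, snis_est, dr_est
```

The sketch makes the abstract's trade-off concrete: DR's correction term relies on `q_hat`, so a badly misspecified Q-function can inflate its variance relative to IS or SNIS, while SNIS's normalization by the summed weights is what gives it the stability and boundedness properties the paper's estimators aim to retain.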
