Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies
Nathan Kallus · Masatoshi Uehara

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #201

Offline reinforcement learning, wherein one uses off-policy data logged by a fixed behavior policy to evaluate and learn new policies, is crucial in applications where experimentation is limited, such as medicine. We study the estimation of the policy value and gradient of a deterministic policy from off-policy data when actions are continuous. Targeting deterministic policies, for which the action is a deterministic function of the state, is crucial since optimal policies are always deterministic (up to ties). In this setting, standard importance sampling and doubly robust estimators for the policy value and gradient fail because the density ratio does not exist. To circumvent this issue, we propose several new doubly robust estimators based on different kernelization approaches. We analyze the asymptotic mean-squared error of each of these under mild rate conditions on the nuisance estimators. Specifically, we demonstrate how to obtain a rate that is independent of the horizon length.
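To make the kernelization idea concrete, here is a minimal single-step (contextual-bandit) sketch, not the paper's full sequential estimator: because the target policy is deterministic, the usual density ratio is undefined, so the indicator of "action matches the target" is smoothed with a kernel of bandwidth h. The simulated data, the quadratic reward, the Gaussian behavior policy, and the outcome model q_hat below are all illustrative assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(u, h):
    # K_h(u) = K(u / h) / h with a standard normal kernel K
    return np.exp(-0.5 * (u / h) ** 2) / (np.sqrt(2 * np.pi) * h)

# --- Simulated logged bandit data (illustrative only) ---
n = 20000
s = rng.uniform(-1.0, 1.0, size=n)                 # states
sigma_b = 0.5
a = s + sigma_b * rng.standard_normal(n)           # behavior policy: a ~ N(s, sigma_b^2)
r = -(a - 0.5 * s) ** 2 + 0.1 * rng.standard_normal(n)  # reward, maximized at a = 0.5 s

def pi_b_density(a, s):
    # Known behavior-policy density N(s, sigma_b^2)
    return np.exp(-0.5 * ((a - s) / sigma_b) ** 2) / (np.sqrt(2 * np.pi) * sigma_b)

def pi_target(s):
    # Deterministic target policy: a = 0.5 * s (here, the true optimum)
    return 0.5 * s

def q_hat(s, a):
    # Outcome (Q-function) model used as the doubly robust baseline;
    # here set to the true mean reward for illustration
    return -(a - 0.5 * s) ** 2

h = n ** (-1 / 5)  # bandwidth shrinking with n, a standard nonparametric choice

# Kernel-smoothed importance weights replacing the undefined density ratio
w = gaussian_kernel(a - pi_target(s), h) / pi_b_density(a, s)

# Kernelized importance sampling vs. kernelized doubly robust value estimates
v_is = np.mean(w * r)
v_dr = np.mean(q_hat(s, pi_target(s)) + w * (r - q_hat(s, a)))

print(v_is, v_dr)  # both should be close to 0, the true value of the target policy
```

The doubly robust form keeps the estimate consistent if either the behavior density or the outcome model is correct, and the kernel introduces a bias of order h^2 that the paper's analysis trades off against variance.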

Author Information

Nathan Kallus (Cornell University)
Masatoshi Uehara (Cornell University)