
Off-Policy Interval Estimation with Lipschitz Value Iteration
Ziyang Tang · Yihao Feng · Na Zhang · Jian Peng · Qiang Liu

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #164

Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data. When applied to high-stakes scenarios such as medical diagnosis or financial decision-making, it is essential to provide end-users with provably correct upper and lower bounds on the expected reward, not just a classical single point estimate, as executing a poor policy can be very costly. In this work, we propose a provably correct method for obtaining interval bounds for off-policy evaluation in a general continuous setting. The idea is to search for the maximum and minimum values of the expected reward among all the Lipschitz Q-functions that are consistent with the observations, which amounts to solving a constrained optimization problem on a Lipschitz function space. We go on to introduce a Lipschitz value iteration method to monotonically tighten the interval, which is simple yet efficient and provably convergent. We demonstrate the practical efficiency of our method on a range of benchmarks.
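The core primitive behind such interval bounds can be illustrated with a toy sketch. This is our own simplification, not the paper's algorithm: it assumes 1-D states, deterministic observed transitions, and bounds on a state-value function rather than a Q-function; the function names (`lipschitz_envelope`, `lipschitz_value_iteration_upper`) and the initialization from `r_max / (1 - gamma)` are our choices for illustration.

```python
import numpy as np

def lipschitz_envelope(x_query, x_obs, f_obs, L):
    """Tightest upper/lower bounds at each query point over all
    L-Lipschitz functions agreeing with observed pairs (x_obs, f_obs)."""
    x_query, x_obs, f_obs = map(np.asarray, (x_query, x_obs, f_obs))
    d = np.abs(x_query[:, None] - x_obs[None, :])   # pairwise |x - x_i|
    upper = np.min(f_obs[None, :] + L * d, axis=1)  # f(x) <= f_i + L|x - x_i|
    lower = np.max(f_obs[None, :] - L * d, axis=1)  # f(x) >= f_i - L|x - x_i|
    return lower, upper

def lipschitz_value_iteration_upper(s, r, s_next, gamma, L, r_max,
                                    n_iter=100):
    """Monotonically tightening upper bounds u_i on an L-Lipschitz value
    function at observed states, given deterministic observed transitions
    s_i -> s_next_i with reward r_i (toy sketch)."""
    s, r, s_next = map(np.asarray, (s, r, s_next))
    u = np.full(len(s), r_max / (1.0 - gamma))  # loose but valid start
    d = np.abs(s_next[:, None] - s[None, :])    # d(s'_i, s_j)
    for _ in range(n_iter):
        # Bellman backup through the Lipschitz upper envelope of u;
        # the map is a gamma-contraction and monotone, so the bounds
        # only shrink from the loose initialization.
        u = r + gamma * np.min(u[None, :] + L * d, axis=1)
    return u
```

For instance, observing f(0) = 0 and f(1) = 0 with L = 1 pins any consistent Lipschitz function at x = 0.5 to the interval [-0.5, 0.5], and the value-iteration loop converges to a fixed point of the enveloped backup.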

Author Information

Ziyang Tang (UT Austin)
Yihao Feng (UT Austin)

I am a Ph.D. student at UT Austin, where I work on Reinforcement Learning and Approximate Inference. I am looking for internships for summer 2020! Please feel free to contact me (yihao AT cs.utexas.edu) if you have open positions!

Na Zhang (Tsinghua University)
Jian Peng (University of Illinois at Urbana-Champaign)
Qiang Liu (UT Austin)
