
Uncertainty-Aware Instance Reweighting for Off-Policy Learning
Xiaoying Zhang · Junpu Chen · Hongning Wang · Hong Xie · Yang Liu · John C.S. Lui · Hang Li

Tue Dec 12 03:15 PM -- 05:15 PM (PST) @ Great Hall & Hall B1+B2 #1310

Off-policy learning, the procedure of policy optimization with access only to logged feedback data, is important in real-world applications such as search engines and recommender systems. While the ground-truth logging policy is usually unknown, previous work simply substitutes its estimated value for off-policy learning, ignoring the negative impact of both the high bias and the high variance that result from such an estimator. This impact is often magnified on samples with small and inaccurately estimated logging probabilities. The contribution of this work is to explicitly model the uncertainty in the estimated logging policy and propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning, with a theoretical convergence guarantee. Experimental results on synthetic and real-world recommendation datasets demonstrate that UIPS significantly improves the quality of the discovered policy when compared against an extensive list of state-of-the-art baselines.
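To illustrate the idea described above, the sketch below contrasts a vanilla inverse propensity score (IPS) estimate with an uncertainty-aware variant. The shrinkage rule used here (scaling each weight by `p̂² / (p̂² + σ²)`, so that samples with small, high-variance logging-probability estimates contribute less) is a hypothetical simplification for illustration, not the paper's exact UIPS estimator; all function and variable names are assumptions.

```python
import numpy as np

def ips_value(rewards, target_probs, logged_probs_hat):
    """Vanilla IPS estimate of the target policy's value from logged
    data, using the *estimated* logging probabilities."""
    weights = target_probs / logged_probs_hat
    return np.mean(weights * rewards)

def uncertainty_aware_ips_value(rewards, target_probs,
                                logged_probs_hat, logged_probs_var):
    """Illustrative uncertainty-aware IPS: each importance weight is
    shrunk by a factor that decays as the variance of the estimated
    logging probability grows, damping samples whose logging
    probabilities are small and inaccurately estimated.
    (Hypothetical shrinkage rule, not the paper's exact UIPS.)"""
    weights = target_probs / logged_probs_hat
    shrink = logged_probs_hat**2 / (logged_probs_hat**2 + logged_probs_var)
    return np.mean(shrink * weights * rewards)
```

With zero estimated variance the two estimators coincide; as uncertainty grows, the uncertainty-aware estimate discounts the unreliable high-variance weights.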

Author Information

Xiaoying Zhang (ByteDance Research)
Junpu Chen (Chongqing University)
Hongning Wang (Tsinghua University)
Hong Xie (Chongqing University)
Yang Liu (UC Santa Cruz/ByteDance Research)
John C.S. Lui (Chinese University of Hong Kong)
Hang Li (Bytedance Technology)
