Poster
Markovian Interference in Experiments
Vivek Farias · Andrew Li · Tianyi Peng · Andrew Zheng

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #330

We consider experiments in dynamical systems where interventions on some experimental units impact other units through a limiting constraint (such as a limited supply of products). Despite outsize practical importance, the best estimators for this "Markovian" interference problem are largely heuristic in nature, and their bias is not well understood. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, apparently incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator: the Differences-In-Q's (DQ) estimator. We show that the DQ estimator can in general have exponentially smaller variance than off-policy evaluation. At the same time, its bias is second order in the impact of the intervention. This yields a striking bias-variance tradeoff so that the DQ estimator effectively dominates state-of-the-art alternatives. From a theoretical perspective, we introduce three separate novel techniques that are of independent interest in the theory of Reinforcement Learning (RL). Our empirical evaluation includes a set of experiments on a city-scale ride-hailing simulator.
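The abstract does not spell out the estimator, but a rough intuition for a "difference in Q's"-style quantity can be sketched as follows: from a single logged trajectory run under the experimentation (A/B) policy, estimate a Q-value at each step (here crudely, via a truncated sum of future outcomes), then contrast the average Q-value following treated steps against that following control steps. This is only an illustrative sketch under those assumptions; the `dq_estimate` function, the truncation horizon, and the synthetic data below are hypothetical and are not the paper's exact construction.

```python
import numpy as np

def dq_estimate(rewards, treatments, horizon=50):
    """Illustrative difference-in-Q's-style estimate from one logged trajectory.

    rewards:    per-step outcomes observed under the A/B (experimentation) policy
    treatments: binary indicators, 1 if the intervention was applied at that step
    horizon:    truncation length used as a crude proxy for the Q-value at each step
    """
    rewards = np.asarray(rewards, dtype=float)
    treatments = np.asarray(treatments, dtype=int)
    T = len(rewards)

    # Truncated cumulative future reward serves as a simple stand-in for the Q-value.
    q_values = np.array([rewards[t:t + horizon].sum() for t in range(T - horizon)])
    z = treatments[:T - horizon]

    # On-policy contrast: average Q after treated steps minus average Q after control steps.
    return q_values[z == 1].mean() - q_values[z == 0].mean()


# Purely synthetic usage example.
rng = np.random.default_rng(0)
treatments = rng.integers(0, 2, size=10_000)
rewards = 1.0 + 0.05 * treatments + rng.normal(scale=0.5, size=10_000)
print(dq_estimate(rewards, treatments))
```

The key contrast with off-policy evaluation is that this quantity is computed entirely from the trajectory actually observed under randomization, which is what drives the variance advantage the abstract describes, at the cost of a bias that the paper shows is second order in the intervention's impact.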

Author Information

Vivek Farias (Massachusetts Institute of Technology)
Andrew Li (Carnegie Mellon University)
Tianyi Peng (Massachusetts Institute of Technology)
Andrew Zheng (Massachusetts Institute of Technology)
