Policy learning “without” overlap: Pessimism and generalized empirical Bernstein’s inequality
Abstract
This paper studies offline policy learning, which aims to utilize observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption; that is, the propensities of exploring all actions for all individual characteristics must be lower bounded, so the performance of existing methods depends on the worst-case propensity in the offline data set. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities. In this paper, we propose Pessimistic Policy Learning (PPL), a new algorithm that optimizes lower confidence bounds (LCBs) of the policy values instead of point estimates. The LCBs are constructed using knowledge of the behavior policies that collected the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound on the suboptimality of our algorithm that depends only on (i) the overlap for the optimal policy and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal actions are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized-type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data. We complement our theory with an efficient optimization algorithm based on Majorization–Minimization and policy tree search, as well as extensive simulation studies and real-world applications that demonstrate the efficacy of PPL.
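To make the pessimism principle concrete, a schematic form of the objective is sketched below; the notation ($\hat{V}(\pi)$ for the inverse-propensity-weighted value estimate, $\widehat{R}(\pi)$ for a data-dependent uncertainty width, and $\lambda$ for a multiplier) is illustrative and not necessarily the paper's exact construction:
\[
  \hat{\pi}_{\mathrm{PPL}} \;=\; \operatorname*{arg\,max}_{\pi \in \Pi} \Big\{ \hat{V}(\pi) \;-\; \lambda\, \widehat{R}(\pi) \Big\},
\]
where the maximand serves as a lower confidence bound on the true value $V(\pi)$: policies whose values can only be estimated with large uncertainty (e.g., because their actions were rarely explored by the behavior policies) are penalized rather than rewarded.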