We present a new approach to the problems of evaluating and learning personalized decision policies from observational data of past contexts, decisions, and outcomes. Only the outcome of the enacted decision is available, and the historical policy is unknown. These problems arise in personalized medicine using electronic health records and in internet advertising. Existing approaches use inverse propensity weighting (or doubly robust versions) to make historical outcome (or residual) data look as if it were generated by a new policy being evaluated or learned. But this relies on a plug-in approach that rejects data points whose decision disagrees with the new policy, leading to high-variance estimates and ineffective learning. We propose a new, balance-based approach that also makes the data look as if it were generated by the new policy, but does so directly by finding weights that optimize for balance between the weighted data and the target policy in the given, finite sample, which is equivalent to minimizing worst-case or posterior conditional mean square error. Our policy learner proceeds as a two-level optimization problem over policies and weights. We demonstrate that this approach markedly outperforms existing ones both in evaluation and learning, which is unsurprising given the wider support of balance-based weights. We establish extensive theoretical consistency guarantees and regret bounds that support this empirical success.
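To see the rejection problem the abstract describes, the standard inverse propensity weighting (IPW) baseline can be sketched on synthetic data. This is a minimal illustration of the baseline being critiqued, not the paper's balance-based method; the data-generating process and all variable names are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: contexts X, binary decisions T, outcomes Y.
n = 10_000
X = rng.normal(size=n)
propensity = 1 / (1 + np.exp(-X))          # historical policy P(T=1 | X), known here for illustration
T = rng.binomial(1, propensity)
Y = X * T + rng.normal(scale=0.1, size=n)  # outcome depends on context and enacted decision

# New deterministic policy to evaluate: treat exactly when X > 0.
pi = (X > 0).astype(int)

# IPW reweights logged outcomes so the data "look like" they were
# generated by the new policy. Points where the logged decision
# disagrees with pi get weight zero -- the rejection of data that
# drives up variance, which balance-based weights avoid.
p_enacted = np.where(T == 1, propensity, 1 - propensity)  # prob. of the decision actually taken
agree = (T == pi).astype(float)
ipw_estimate = np.mean(agree * Y / p_enacted)

print(f"IPW estimate of the new policy's value: {ipw_estimate:.3f}")
print(f"fraction of data points rejected: {1 - agree.mean():.2f}")
```

Under this toy model the true policy value is E[X · 1{X > 0}] ≈ 0.399, and roughly a third of the sample is discarded by the agreement indicator, illustrating the variance cost of the plug-in approach.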
Author Information
Nathan Kallus (Cornell University)
More from the Same Authors
- 2022 Panel: Panel 3C-5: Biologically-Plausible Determinant Maximization… & What's the Harm? ...
  Bariscan Bozkurt · Nathan Kallus
- 2022 Poster: Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems
  Masatoshi Uehara · Ayush Sekhari · Jason Lee · Nathan Kallus · Wen Sun
- 2022 Poster: The Implicit Delta Method
  Nathan Kallus · James McInerney
- 2022 Poster: What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment
  Nathan Kallus
- 2021 Workshop: Causal Inference Challenges in Sequential Decision Making: Bridging Theory and Practice
  Aurelien Bibaut · Maria Dimakopoulou · Nathan Kallus · Xinkun Nie · Masatoshi Uehara · Kelly Zhang
- 2021 Poster: Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning
  Aurelien Bibaut · Nathan Kallus · Maria Dimakopoulou · Antoine Chambaz · Mark van der Laan
- 2021 Poster: Control Variates for Slate Off-Policy Evaluation
  Nikos Vlassis · Ashok Chandrashekar · Fernando Amat · Nathan Kallus
- 2021 Poster: Post-Contextual-Bandit Inference
  Aurelien Bibaut · Maria Dimakopoulou · Nathan Kallus · Antoine Chambaz · Mark van der Laan
- 2020 Workshop: Consequential Decisions in Dynamic Environments
  Niki Kilbertus · Angela Zhou · Ashia Wilson · John Miller · Lily Hu · Lydia T. Liu · Nathan Kallus · Shira Mitchell
- 2020 Spotlight Talk 4: Fairness, Welfare, and Equity in Personalized Pricing
  Nathan Kallus · Angela Zhou
- 2020 Poster: Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning
  Nathan Kallus · Angela Zhou
- 2020 Poster: Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies
  Nathan Kallus · Masatoshi Uehara
- 2019 Coffee Break and Poster Session
  Rameswar Panda · Prasanna Sattigeri · Kush Varshney · Karthikeyan Natesan Ramamurthy · Harvineet Singh · Vishwali Mhasawade · Shalmali Joshi · Laleh Seyyed-Kalantari · Matthew McDermott · Gal Yona · James Atwood · Hansa Srinivasan · Yonatan Halpern · D. Sculley · Behrouz Babaki · Margarida Carvalho · Josie Williams · Narges Razavian · Haoran Zhang · Amy Lu · Irene Y Chen · Xiaojie Mao · Angela Zhou · Nathan Kallus
- 2019 Opening Remarks
  Thorsten Joachims · Nathan Kallus · Michele Santacatterina · Adith Swaminathan · David Sontag · Angela Zhou
- 2019 Workshop: “Do the right thing”: machine learning and causal inference for improved decision making
  Michele Santacatterina · Thorsten Joachims · Nathan Kallus · Adith Swaminathan · David Sontag · Angela Zhou
- 2019 Talk: Nathan Kallus: Efficiently Breaking the Curse of Horizon with Double Reinforcement Learning
  Nathan Kallus
- 2019 Poster: The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric
  Nathan Kallus · Angela Zhou
- 2019 Poster: Assessing Disparate Impact of Personalized Interventions: Identifiability and Bounds
  Nathan Kallus · Angela Zhou
- 2019 Poster: Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning
  Nathan Kallus · Masatoshi Uehara
- 2019 Poster: Policy Evaluation with Latent Confounders via Optimal Balance
  Andrew Bennett · Nathan Kallus
- 2019 Poster: Deep Generalized Method of Moments for Instrumental Variable Analysis
  Andrew Bennett · Nathan Kallus · Tobias Schnabel
- 2018 Workshop: Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy
  Manuela Veloso · Nathan Kallus · Sameena Shah · Senthil Kumar · Isabelle Moulinier · Jiahao Chen · John Paisley
- 2018 Poster: Causal Inference with Noisy and Missing Covariates via Matrix Factorization
  Nathan Kallus · Xiaojie Mao · Madeleine Udell
- 2018 Poster: Removing Hidden Confounding by Experimental Grounding
  Nathan Kallus · Aahlad Puli · Uri Shalit
- 2018 Spotlight: Removing Hidden Confounding by Experimental Grounding
  Nathan Kallus · Aahlad Puli · Uri Shalit
- 2018 Poster: Confounding-Robust Policy Improvement
  Nathan Kallus · Angela Zhou
- 2017 Workshop: From 'What If?' To 'What Next?': Causal Inference and Machine Learning for Intelligent Decision Making
  Ricardo Silva · Panagiotis Toulis · John Shawe-Taylor · Alexander Volfovsky · Thorsten Joachims · Lihong Li · Nathan Kallus · Adith Swaminathan