One of the most commonly used methods for forming confidence intervals is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown. However, despite its ubiquitous role in machine learning, its theoretical properties are still not well understood. Recent developments in probability have provided new tools to study the bootstrap method, but they have been applied only to specific applications and contexts, and it is unclear whether they can be used to establish the consistency of the bootstrap in machine learning pipelines. In this paper, we derive general stability conditions under which the empirical bootstrap estimator is consistent and quantify the speed of convergence. Moreover, we propose alternative ways to use the bootstrap method to build confidence intervals with coverage guarantees. Finally, we illustrate the generality and tightness of our results with examples of interest for machine learning, including two-sample kernel tests after kernel selection and the empirical risk of stacked estimators.
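For context, a minimal sketch of the empirical (percentile) bootstrap the abstract refers to: resample the data with replacement, recompute the statistic on each replicate, and read off quantiles of the resulting distribution. This is a generic illustration, not the paper's construction; the function name and parameters are hypothetical.

```python
import numpy as np

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap confidence interval for stat(sample)."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    # Resample with replacement and recompute the statistic on each replicate.
    reps = np.array([stat(sample[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    # The interval endpoints are quantiles of the bootstrap distribution.
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Usage: a 95% interval for the mean of a skewed sample, where the
# normal approximation may be less accurate.
x = np.random.default_rng(0).exponential(size=200)
lo, hi = bootstrap_ci(x, np.mean, seed=1)
```

The paper's contribution concerns when such intervals are asymptotically valid (consistency of the bootstrap distribution) and how fast they converge, which this naive recipe takes for granted.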
Author Information
Morgane Austern (Harvard)
Vasilis Syrgkanis (Microsoft Research)
More from the Same Authors
- 2021 : Double/Debiased Machine Learning for Dynamic Treatment Effects via $g$-Estimation »
  Greg Lewis · Vasilis Syrgkanis
- 2021 : Estimating the Long-Term Effects of Novel Treatments »
  Keith Battocchi · Maggie Hei · Greg Lewis · Miruna Oprescu · Vasilis Syrgkanis
- 2022 Poster: Debiased Machine Learning without Sample-Splitting for Stable Estimators »
  Qizhao Chen · Vasilis Syrgkanis · Morgane Austern
- 2021 Poster: Double/Debiased Machine Learning for Dynamic Treatment Effects »
  Greg Lewis · Vasilis Syrgkanis
- 2021 Poster: Estimating the Long-Term Effects of Novel Treatments »
  Keith Battocchi · Eleanor Dillon · Maggie Hei · Greg Lewis · Miruna Oprescu · Vasilis Syrgkanis
- 2020 Poster: Minimax Estimation of Conditional Moment Models »
  Nishanth Dikkala · Greg Lewis · Lester Mackey · Vasilis Syrgkanis
- 2019 : Coffee break, posters, and 1-on-1 discussions »
  Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019 Poster: Semi-Parametric Efficient Policy Learning with Continuous Actions »
  Victor Chernozhukov · Mert Demirer · Greg Lewis · Vasilis Syrgkanis
- 2019 Poster: Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing »
  Jonas Mueller · Vasilis Syrgkanis · Matt Taddy
- 2019 Poster: Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments »
  Vasilis Syrgkanis · Victor Lei · Miruna Oprescu · Maggie Hei · Keith Battocchi · Greg Lewis
- 2019 Spotlight: Machine Learning Estimation of Heterogeneous Treatment Effects with Instruments »
  Vasilis Syrgkanis · Victor Lei · Miruna Oprescu · Maggie Hei · Keith Battocchi · Greg Lewis
- 2018 Workshop: Smooth Games Optimization and Machine Learning »
  Simon Lacoste-Julien · Ioannis Mitliagkas · Gauthier Gidel · Vasilis Syrgkanis · Eva Tardos · Leon Bottou · Sebastian Nowozin
- 2017 Workshop: Learning in the Presence of Strategic Behavior »
  Nika Haghtalab · Yishay Mansour · Tim Roughgarden · Vasilis Syrgkanis · Jennifer Wortman Vaughan
- 2017 Poster: Welfare Guarantees from Data »
  Darrell Hoy · Denis Nekipelov · Vasilis Syrgkanis
- 2017 Poster: Robust Optimization for Non-Convex Objectives »
  Robert S Chen · Brendan Lucier · Yaron Singer · Vasilis Syrgkanis
- 2017 Poster: A Sample Complexity Measure with Applications to Learning Optimal Auctions »
  Vasilis Syrgkanis
- 2017 Oral: Robust Optimization for Non-Convex Objectives »
  Robert S Chen · Brendan Lucier · Yaron Singer · Vasilis Syrgkanis
- 2016 Poster: Improved Regret Bounds for Oracle-Based Adversarial Contextual Bandits »
  Vasilis Syrgkanis · Haipeng Luo · Akshay Krishnamurthy · Robert Schapire
- 2015 Poster: No-Regret Learning in Bayesian Games »
  Jason Hartline · Vasilis Syrgkanis · Eva Tardos
- 2015 Poster: Fast Convergence of Regularized Learning in Games »
  Vasilis Syrgkanis · Alekh Agarwal · Haipeng Luo · Robert Schapire
- 2015 Oral: Fast Convergence of Regularized Learning in Games »
  Vasilis Syrgkanis · Alekh Agarwal · Haipeng Luo · Robert Schapire