Bounding the generalization error of learning algorithms has a long history, which nonetheless falls short of explaining various generalization successes, including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, and (ii) exploiting the dependence between the algorithm's input and output. Progress on the first point was made with the chaining method, originating in the work of Kolmogorov and used in VC-dimension bounds. More recently, progress on the second point was made with the mutual information method of Russo and Zou '15. Yet these two methods are currently disjoint. In this paper, we introduce a technique to combine the chaining and mutual information methods and obtain a generalization bound that is both algorithm-dependent and exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley's inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.
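For context, a rough sketch of the two ingredients the abstract refers to, written in their standard forms from the literature rather than as the paper's exact statements (the subgaussianity assumption, the constant C, and the notation gen(S, W), I(S; W), N(H, d, ε) are the usual ones and are assumptions of this sketch, not taken from the abstract):

\[
\bigl|\mathbb{E}[\mathrm{gen}(S,W)]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S;W)}
\qquad \text{(mutual information bound, for a $\sigma$-subgaussian loss, sample $S$ of size $n$, output hypothesis $W$)}
\]
\[
\mathbb{E}\Bigl[\sup_{h \in \mathcal{H}} X_h\Bigr] \;\le\; C \int_{0}^{\infty} \sqrt{\log N(\mathcal{H}, d, \varepsilon)}\;\mathrm{d}\varepsilon
\qquad \text{(Dudley's chaining bound, $C$ an absolute constant, $N(\mathcal{H},d,\varepsilon)$ a covering number)}
\]

Roughly speaking, the combined bound described in the abstract is multi-scale like Dudley's inequality but replaces the metric-entropy term at each scale with a mutual information term between the input S and a coarsened version of the output W, so that an algorithm concentrating on a small subset of hypotheses pays less at every scale.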
Author Information
Amir Asadi (Princeton University)
Emmanuel Abbe (Princeton University)
Sergio Verdu (Princeton University)
More from the Same Authors
- 2020 Poster: On the universality of deep learning
  Emmanuel Abbe · Colin Sandon
- 2017 Poster: Nonbacktracking Bounds on the Influence in Independent Cascade Models
  Emmanuel Abbe · Sanjeev Kulkarni · Eun Jee Lee
- 2016 Poster: Achieving the KS threshold in the general stochastic block model with linearized acyclic belief propagation
  Emmanuel Abbe · Colin Sandon
- 2016 Oral: Achieving the KS threshold in the general stochastic block model with linearized acyclic belief propagation
  Emmanuel Abbe · Colin Sandon
- 2015 Poster: Recovering Communities in the General Stochastic Block Model Without Knowing the Parameters
  Emmanuel Abbe · Colin Sandon
- 2009 Invited Talk: Relative Entropy
  Sergio Verdu