Chaining Mutual Information and Tightening Generalization Bounds
Amir Asadi · Emmanuel Abbe · Sergio Verdu

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #102

Bounding the generalization error of learning algorithms has a long history, which nevertheless falls short of explaining various generalization successes, including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, and (ii) exploiting the dependence between the algorithm's input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method of Russo and Zou '15. Yet these two methods are currently disjoint. In this paper, we introduce a technique to combine the chaining and mutual information methods, obtaining a generalization bound that is both algorithm-dependent and exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley's inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.
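For context, the two ingredients being combined can be stated compactly. The following display is a sketch of the standard results from the prior literature, not of the paper's own bound; it assumes n i.i.d. samples S, an algorithm output W, a sigma-subgaussian loss, and a hypothesis set T with metric d and covering numbers N(T, d, epsilon). Constants and exact conditions vary across references.

% Mutual information bound (Russo-Zou '15, in the form later
% popularized by Xu and Raginsky): for a \sigma-subgaussian loss,
% the expected generalization error satisfies
\[
  \mathbb{E}\bigl[\mathrm{gen}(S, W)\bigr]
    \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)} .
\]
% Dudley's entropy-integral (chaining) bound: for a separable
% subgaussian process (X_t)_{t \in T} on the metric space (T, d),
\[
  \mathbb{E}\Bigl[\sup_{t \in T} X_t\Bigr]
    \;\le\; C \int_{0}^{\infty} \sqrt{\log N(T, d, \varepsilon)}\; d\varepsilon ,
\]
% for some absolute constant C.

The first bound depends on the algorithm through I(S; W) but ignores the geometry of the hypothesis set; the second exploits the geometry through covering numbers but is algorithm-independent. The paper's contribution is a chained bound that retains both dependencies.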

Author Information

Amir Asadi (Princeton University)
Emmanuel Abbe (Princeton University)
Sergio Verdu (Princeton University)
