Poster
Generalization Bounds for Uniformly Stable Algorithms
Vitaly Feldman · Jan Vondrak
Room 210 #85
Keywords: [ Learning Theory ]
Abstract:
Uniform stability of a learning algorithm is a classical notion of algorithmic stability introduced to derive high-probability bounds on the generalization error (Bousquet and Elisseeff, 2002). Specifically, for a loss function with range bounded in [0,1], the generalization error of a γ-uniformly stable learning algorithm on n samples is known to be at most O((γ + 1/n)·√(n·log(1/δ))) with probability at least 1−δ. Unfortunately, this bound does not lead to meaningful generalization bounds in many common settings where γ ≥ 1/√n. At the same time, the bound is known to be tight only when γ = O(1/n).
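A minimal worked instance (not part of the abstract) of why the classical bound becomes vacuous once γ reaches 1/√n; the constant hidden by the O(·) is assumed to be 1 here.

```latex
% Hedged sketch: plug gamma = 1/sqrt(n) into the classical bound,
% taking the O(.)-constant to be 1.
\[
  \left(\gamma + \tfrac{1}{n}\right)\sqrt{n\log(1/\delta)}
  \;\ge\; \frac{1}{\sqrt{n}}\cdot\sqrt{n\log(1/\delta)}
  \;=\; \sqrt{\log(1/\delta)} \;\ge\; 1
  \quad\text{for } \delta \le 1/e,
\]
% which already exceeds the [0,1] range of the loss, so the bound carries no information.
```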
Here we prove substantially stronger generalization bounds for uniformly stable algorithms without any additional assumptions. First, we show that the generalization error in this setting is at most O(√((γ + 1/n)·log(1/δ))) with probability at least 1−δ. In addition, we prove a tight bound of O(γ² + 1/n) on the second moment of the generalization error. The best previous bound on the second moment of the generalization error was O(γ + 1/n). Our proofs are based on new analysis techniques, and our results imply substantially stronger generalization guarantees for several well-studied algorithms.
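A hedged numerical sketch (not from the paper) comparing the scaling of the classical high-probability bound with the new one, under the hypothetical choices γ = n^(-1/2) and δ = 0.05, with all O(·) constants taken to be 1:

```python
import math

# Hedged sketch: compare the classical Bousquet-Elisseeff-style bound with the
# new high-probability bound stated in the abstract. The setting gamma = n**-0.5
# and delta = 0.05 is a hypothetical choice for illustration; constants hidden
# by O(.) are taken to be 1, so only the scaling with n is meaningful.

def classical_bound(n, gamma, delta):
    # O((gamma + 1/n) * sqrt(n * log(1/delta)))
    return (gamma + 1.0 / n) * math.sqrt(n * math.log(1.0 / delta))

def new_bound(n, gamma, delta):
    # O(sqrt((gamma + 1/n) * log(1/delta)))
    return math.sqrt((gamma + 1.0 / n) * math.log(1.0 / delta))

delta = 0.05
for n in [100, 10_000, 1_000_000]:
    gamma = n ** -0.5
    print(f"n={n:>9}  classical={classical_bound(n, gamma, delta):.3f}  "
          f"new={new_bound(n, gamma, delta):.3f}")
```

Under these assumptions the classical bound stays near √(log(1/δ)) ≈ 1.7 for all n (vacuous for a loss in [0,1]), while the new bound decreases roughly as n^(-1/4).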