Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks
Spencer Frei · Yuan Cao · Quanquan Gu

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #195

The skip-connections used in residual networks have become a standard architecture choice in deep learning because networks with them train more stably and generalize better, yet theoretical guarantees for this improved performance have been limited. In this work, we analyze overparameterized deep residual networks trained by gradient descent from random initialization, and demonstrate that (i) the class of networks learned by gradient descent constitutes a small subset of the entire neural network function class, and (ii) this subclass is nonetheless rich enough to guarantee small training error. From (i) we conclude that deep residual networks trained with gradient descent have a small generalization gap between training and test error, and combined with (ii) this guarantees that the test error is small. Our optimization and generalization guarantees require overparameterization that is only logarithmic in the depth of the network, which helps explain why residual networks are preferable to fully connected ones.
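To make the setting concrete, the following is a minimal sketch (not the paper's exact construction or parameterization) of the kind of model the abstract describes: a deep network whose hidden layers are identity skip-connections plus a ReLU transform, trained by full-batch gradient descent from random Gaussian initialization. The width `m`, depth `L`, initialization scales, and learning rate below are illustrative choices, not values from the paper.

```python
# Hypothetical sketch of a deep residual network trained by gradient descent.
# Each block computes h <- h + relu(W h), i.e. an identity skip-connection.
import numpy as np

rng = np.random.default_rng(0)

def init_params(d, m, L, scale=0.1):
    # Random Gaussian initialization: input layer, L residual blocks, output layer.
    A = rng.normal(0.0, 1.0 / np.sqrt(d), (m, d))
    Ws = [rng.normal(0.0, scale / np.sqrt(m), (m, m)) for _ in range(L)]
    v = rng.normal(0.0, 1.0 / np.sqrt(m), m)
    return A, Ws, v

def forward(params, X):
    """Forward pass; caches pre-activations and hidden states for backprop."""
    A, Ws, v = params
    Z0 = X @ A.T
    H = np.maximum(Z0, 0.0)
    Hs, Zs = [H], []
    for W in Ws:
        Z = H @ W.T
        Zs.append(Z)
        H = H + np.maximum(Z, 0.0)  # skip-connection: identity path plus ReLU branch
        Hs.append(H)
    return Hs[-1] @ v, (Z0, Zs, Hs)

def grad_step(params, X, y, lr):
    """One step of full-batch gradient descent on the mean squared error."""
    A, Ws, v = params
    out, (Z0, Zs, Hs) = forward(params, X)
    n = len(y)
    dout = 2.0 * (out - y) / n
    dv = Hs[-1].T @ dout
    dH = np.outer(dout, v)
    dWs = [None] * len(Ws)
    for l in reversed(range(len(Ws))):
        dZ = dH * (Zs[l] > 0)          # ReLU gate on the residual branch
        dWs[l] = dZ.T @ Hs[l]          # Hs[l] is the input to block l
        dH = dH + dZ @ Ws[l]           # gradient flows through both paths
    dZ0 = dH * (Z0 > 0)
    dA = dZ0.T @ X
    return (A - lr * dA,
            [W - lr * dW for W, dW in zip(Ws, dWs)],
            v - lr * dv)

# Toy regression: with a wide enough network, the training loss shrinks.
X = rng.normal(size=(32, 5))
y = np.sin(X[:, 0])
params = init_params(d=5, m=256, L=8)
loss0 = np.mean((forward(params, X)[0] - y) ** 2)
for _ in range(500):
    params = grad_step(params, X, y, lr=0.005)
loss1 = np.mean((forward(params, X)[0] - y) ** 2)
```

Because the ReLU branch is added to an identity path, the gradient in `grad_step` always has a direct term (`dH`) that bypasses each block's weights, which is one informal way to see why skip-connections stabilize training of deep networks.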

Author Information

Spencer Frei (UCLA)
Yuan Cao (UCLA)
Quanquan Gu (UCLA)

More from the Same Authors