

Poster

SGD Learns the Conjugate Kernel Class of the Network

Amit Daniely

Pacific Ballroom #209

Keywords: [ Large Margin Methods ] [ Theory ] [ Learning Theory ] [ Kernel Methods ]


Abstract:

We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely, Frostig and Singer. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more than two. As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant-degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (though not in polynomial time), complementing classical expressivity results.
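The setting the abstract refers to is plain SGD on a randomly initialized feed-forward network. The sketch below is not from the paper; it is a minimal illustration of that setting, with illustrative choices of `depth`, `width`, mini-batch size, learning rate, and a synthetic low-degree polynomial target of the kind covered by the paper's corollaries.

```python
# Minimal sketch (illustrative only): train a randomly initialized
# fully connected ReLU network with plain SGD on a supervised task.
import torch
import torch.nn as nn

depth, width, dim, n = 3, 512, 20, 1000          # illustrative sizes

# Synthetic data whose labels come from a constant-degree polynomial of the
# inputs with bounded coefficients, matching the corollaries' target class.
X = torch.randn(n, dim)
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2).unsqueeze(1)

# Fully connected network with standard random initialization.
layers, in_features = [], dim
for _ in range(depth):
    layers += [nn.Linear(in_features, width), nn.ReLU()]
    in_features = width
layers.append(nn.Linear(in_features, 1))
net = nn.Sequential(*layers)

opt = torch.optim.SGD(net.parameters(), lr=1e-2)  # plain SGD, no momentum
loss_fn = nn.MSELoss()

for step in range(2000):
    idx = torch.randint(0, n, (32,))              # stochastic mini-batch
    opt.zero_grad()
    loss = loss_fn(net(X[idx]), y[idx])
    loss.backward()
    opt.step()
```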
