Poster
Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang

Thu Dec 01 09:30 AM -- 11:00 AM (PST) @ Hall J #806
There is mounting empirical evidence of emergent phenomena in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work conducts such an exploration through the lens of learning $k$-sparse parities of $n$ bits, a canonical family of problems that poses theoretical computational barriers. In this setting, we find that neural networks exhibit surprising phase transitions when scaling up dataset size and running time. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with $n^{O(k)}$ examples, with loss (and error) curves abruptly dropping after $n^{O(k)}$ iterations. These positive results nearly match known statistical query (SQ) lower bounds, even without an explicit sparsity-promoting prior. We elucidate the mechanisms of these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features; instead, we show that SGD gradually amplifies a Fourier gap in the population gradient.
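For readers unfamiliar with the setup, the following is a minimal sketch (not the authors' code) of the $k$-sparse parity task and plain SGD training described above: inputs are uniform over $\{\pm 1\}^n$, and the label is the product of a hidden size-$k$ subset of coordinates. The architecture, learning rate, batch size, and step count are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of the (n, k)-sparse parity task with a small MLP and SGD.
# All hyperparameters below are assumptions for demonstration only.
import torch
import torch.nn as nn

n, k = 30, 3                       # ambient dimension and parity sparsity
support = torch.arange(k)          # hidden relevant coordinates (w.l.o.g. the first k)

def sample_batch(batch_size):
    # Uniform inputs in {-1, +1}^n; label is the product of the k support coordinates.
    x = torch.randint(0, 2, (batch_size, n)).float() * 2 - 1
    y = x[:, support].prod(dim=1)
    return x, y

model = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.SoftMarginLoss()      # logistic loss on +/-1 labels

for step in range(50_000):
    x, y = sample_batch(256)
    loss = loss_fn(model(x).squeeze(-1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            xt, yt = sample_batch(4096)
            err = (torch.sign(model(xt).squeeze(-1)) != yt).float().mean()
        print(f"step {step:6d}  loss {loss.item():.4f}  error {err.item():.4f}")
```

In runs of this kind, the reported error typically hovers near 0.5 for many iterations before dropping sharply, which is the phase-transition behavior the abstract refers to.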

Author Information

Boaz Barak (Harvard University)
Benjamin Edelman (Harvard University)
Surbhi Goel (Microsoft Research NYC)
Sham Kakade (Harvard University & Microsoft Research)
Eran Malach (Hebrew University of Jerusalem, Israel)
Cyril Zhang (Microsoft Research NYC)
