Poster
Implicit Bias of Gradient Descent on Linear Convolutional Networks
Suriya Gunasekar · Jason Lee · Daniel Soudry · Nati Srebro

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #82
We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linear fully connected networks, where gradient descent converges to the hard margin linear SVM solution, regardless of depth.
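As a hedged sketch of the two characterizations (notation assumed here, not quoted from the paper): for linear fully connected networks of any depth, the limit direction of gradient descent is the hard-margin SVM solution, $\bar{\beta} \propto \arg\min_{\beta} \|\beta\|_2^2$ s.t. $y_n \langle x_n, \beta \rangle \ge 1$ for all $n$, whereas for full-width linear convolutional networks of depth $L$ the limit direction is instead characterized (up to first-order stationarity) by $\bar{\beta} \propto \arg\min_{\beta} \|\hat{\beta}\|_{2/L}$ s.t. $y_n \langle x_n, \beta \rangle \ge 1$ for all $n$, where $\hat{\beta}$ denotes the discrete Fourier transform of the effective linear predictor $\beta$. For $L = 2$ this penalty is the $\ell_1$ norm of the Fourier coefficients, encouraging predictors that are sparse in the frequency domain.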

Author Information

Suriya Gunasekar (TTI Chicago)
Jason Lee (University of Southern California)
Daniel Soudry (Technion)

I am an assistant professor in the Department of Electrical Engineering at the Technion, working in the areas of machine learning and theoretical neuroscience. I am especially interested in all aspects of neural networks and deep learning. I did my post-doc (as a Gruss Lipper fellow) working with Prof. Liam Paninski in the Department of Statistics, the Center for Theoretical Neuroscience, the Grossman Center for Statistics of the Mind, the Kavli Institute for Brain Science, and the NeuroTechnology Center at Columbia University. I did my Ph.D. (2008-2013, direct track) in the Network Biology Research Laboratory in the Department of Electrical Engineering at the Technion, Israel Institute of Technology, under the guidance of Prof. Ron Meir. In 2008 I graduated summa cum laude with a B.Sc. in Electrical Engineering and a B.Sc. in Physics, after studying at the Technion since 2004.

Nati Srebro (TTI-Chicago)
