Poster
What is being transferred in transfer learning?
Behnam Neyshabur · Hanie Sedghi · Chiyuan Zhang

Tue Dec 08 09:00 PM -- 11:00 PM (PST) @ Poster Session 2 #725

One desired capability for machines is the ability to transfer their understanding of one domain to another domain where data is (usually) scarce. Despite the wide adoption of transfer learning in many deep learning applications, we do not yet understand what enables a successful transfer and which parts of the network are responsible for it. In this paper, we provide new tools and analyses to address these fundamental questions. Through a series of analyses on transferring to block-shuffled images, we separate the effect of feature reuse from that of learning the low-level statistics of the data, and show that some of the benefit of transfer learning comes from the latter. We also show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and that different instances of such models are similar in feature space and close in parameter space.
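
As an illustration of the block-shuffle probe mentioned in the abstract, the sketch below cuts an image into equally sized tiles and permutes them, destroying high-level spatial structure while keeping low-level pixel statistics intact. The function name, block size, and NumPy-based implementation are illustrative assumptions, not the authors' released code.

import numpy as np

def block_shuffle(image: np.ndarray, block_size: int, rng=None) -> np.ndarray:
    """Split an HxWxC image into block_size x block_size tiles and shuffle them.

    Assumes H and W are divisible by block_size.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    bh, bw = h // block_size, w // block_size
    # Cut the image into a (bh*bw, block_size, block_size, C) stack of tiles.
    tiles = (image
             .reshape(bh, block_size, bw, block_size, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(bh * bw, block_size, block_size, c))
    # Permute the tile order, then reassemble the image.
    tiles = tiles[rng.permutation(bh * bw)]
    return (tiles
            .reshape(bh, bw, block_size, block_size, c)
            .transpose(0, 2, 1, 3, 4)
            .reshape(h, w, c))

# Example: shuffle a 224x224 RGB image into 32x32 blocks.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
shuffled = block_shuffle(img, block_size=32)

Fine-tuning on such shuffled images lets one compare how much of the transfer benefit survives once high-level visual structure is removed, which is how the abstract's separation of feature reuse from low-level statistics can be probed.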

Author Information

Behnam Neyshabur (Google)

I am a staff research scientist at Google. Before that, I was a postdoctoral researcher at New York University and a member of the Theoretical Machine Learning program at the Institute for Advanced Study (IAS) in Princeton. In summer 2017, I received a PhD in computer science from TTI-Chicago, where I was fortunate to be advised by Nati Srebro.

Hanie Sedghi (Google Brain)
Chiyuan Zhang (Google Brain)
