Poster
Do Deep Nets Really Need to be Deep?
Jimmy Ba · Rich Caruana

Wed Dec 10 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow nets can learn these deep functions using the same number of parameters as the original deep models. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to complex, well-engineered, deeper convolutional models.
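The approach described in the abstract is often called mimic training: a shallow "student" network is trained to reproduce the real-valued outputs (logits) of a pre-trained deep "teacher" rather than the original hard labels. The sketch below is not the authors' code; it is a minimal illustration of that idea, assuming a hypothetical pre-trained teacher and placeholder layer sizes and data.

# Minimal mimic-training sketch (illustrative, not the authors' implementation):
# train a shallow student net to regress on a deep teacher's pre-softmax logits.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical deep teacher standing in for a well-engineered deep model;
# in practice this would be pre-trained on the task.
teacher = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),           # logits for 10 classes
)
teacher.eval()

# Shallow student: one wide hidden layer.
student = nn.Sequential(
    nn.Linear(64, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)

optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
mse = nn.MSELoss()

# Inputs only; hard labels are unnecessary because the student
# regresses on the teacher's logits.
x = torch.randn(512, 64)

for step in range(100):
    with torch.no_grad():
        target_logits = teacher(x)           # teacher's real-valued logits
    loss = mse(student(x), target_logits)    # L2 regression on logits
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()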

Author Information

Jimmy Ba (University of Toronto)
Rich Caruana (Microsoft)
