Deep neural nets typically perform end-to-end backpropagation to learn the weights, a procedure that creates synchronization constraints in the weight update step across layers and is not biologically plausible. Recent advances in unsupervised contrastive representation learning invite the question of whether a learning algorithm can also be made local, that is, whether the updates of lower layers can avoid directly depending on the computation of upper layers. While Greedy InfoMax learns each block separately with a local objective, we find that it consistently hurts readout accuracy in state-of-the-art unsupervised contrastive learning algorithms, possibly due to the greedy objective as well as gradient isolation. In this work, we discover that by overlapping local blocks that are stacked on top of each other, we effectively increase the decoder depth and allow upper blocks to implicitly send feedback to lower blocks. This simple design closes the performance gap between local learning and end-to-end contrastive learning algorithms for the first time. Aside from standard ImageNet experiments, we also report results on complex downstream tasks such as object detection and instance segmentation, directly using the readout features.
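To make the overlapping-block idea concrete, below is a minimal PyTorch-style sketch of local contrastive training where each block spans two consecutive stages and shares one stage with the block above; gradients are cut with detach between blocks, so there is no end-to-end backpropagation, yet the shared stage still receives feedback from the block above it. All names here (make_stages, local_contrastive_loss, train_step) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of local contrastive training with overlapping blocks,
# assuming a PyTorch-style setup. Names are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_stages(dim=64, num_stages=4):
    """Toy stand-in for the sequential stages of a backbone (e.g. ResNet res2-res5)."""
    return nn.ModuleList(
        nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        for _ in range(num_stages)
    )


def local_contrastive_loss(feat_a, feat_b, temperature=0.1):
    """InfoNCE-style loss on globally pooled features of two augmented views."""
    za = F.normalize(feat_a.flatten(2).mean(-1), dim=1)  # (B, C)
    zb = F.normalize(feat_b.flatten(2).mean(-1), dim=1)
    logits = za @ zb.t() / temperature                   # (B, B)
    labels = torch.arange(za.size(0), device=za.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)


def train_step(stages, x_a, x_b):
    """One local-learning step on two augmented views x_a, x_b.

    Block i spans stages i and i+1, so consecutive blocks overlap in one stage:
    that shared stage gets gradients both from its own block's loss and from the
    block above, even though inputs are detached between blocks and no
    end-to-end gradient path exists.
    """
    total = x_a.new_zeros(())
    h_a, h_b = x_a, x_b
    for i in range(len(stages) - 1):
        mid_a, mid_b = stages[i](h_a), stages[i](h_b)
        out_a, out_b = stages[i + 1](mid_a), stages[i + 1](mid_b)
        total = total + local_contrastive_loss(out_a, out_b)
        # Cut the gradient before the next block: no gradient flows across
        # more than one block boundary, unlike end-to-end training.
        h_a, h_b = mid_a.detach(), mid_b.detach()
    return total


stages = make_stages()
x_a, x_b = torch.randn(8, 64, 32, 32), torch.randn(8, 64, 32, 32)
loss = train_step(stages, x_a, x_b)
loss.backward()  # each block's loss only reaches the stages inside that block
```

In this sketch, a purely greedy scheme would train each stage on detached inputs only; the overlap is what lets an upper block's objective still shape the stage it shares with the block below.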
Author Information
Yuwen Xiong (Uber ATG / University of Toronto)
Mengye Ren (University of Toronto / Uber ATG)
Raquel Urtasun (Uber ATG)
More from the Same Authors
- 2020: Exploring Representation Learning for Flexible Few-Shot Tasks (Mengye Ren)
- 2022: Neural Network Online Training with Sensitivity to Multiscale Temporal Structure (Matt Jones · Tyler Scott · Gamaleldin Elsayed · Mengye Ren · Katherine Hermann · David Mayo · Michael Mozer)
- 2022: Learning to Reason With Relational Abstractions (Andrew Nam · James McClelland · Mengye Ren · Chelsea Finn)
- 2020 Poster: MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models (Sourav Biswas · Jerry Liu · Kelvin Wong · Shenlong Wang · Raquel Urtasun)
- 2019: Coffee Break & Poster Session 1 (Yan Zhang · Jonathon Hare · Adam Prugel-Bennett · Po Leung · Patrick Flaherty · Pitchaya Wiratchotisatian · Alessandro Epasto · Silvio Lattanzi · Sergei Vassilvitskii · Morteza Zadimoghaddam · Theja Tulabandhula · Fabian Fuchs · Adam Kosiorek · Ingmar Posner · William Hang · Anna Goldie · Sujith Ravi · Azalia Mirhoseini · Yuwen Xiong · Mengye Ren · Renjie Liao · Raquel Urtasun · Haici Zhang · Michele Borassi · Shengda Luo · Andrew Trapp · Geoffroy Dubourg-Felonneau · Yasmeen Kussad · Christopher Bender · Manzil Zaheer · Junier Oliva · Michał Stypułkowski · Maciej Zieba · Austin Dill · Chun-Liang Li · Songwei Ge · Eunsu Kang · Oiwi Parker Jones · Kelvin Ka Wing Wong · Joshua Payne · Yang Li · Azade Nazi · Erkut Erdem · Aykut Erdem · Kevin O'Connor · Juan J Garcia · Maciej Zamorski · Jan Chorowski · Deeksha Sinha · Harry Clifford · John W Cassidy)
- 2019 Poster: Incremental Few-Shot Learning with Attention Attractor Networks (Mengye Ren · Renjie Liao · Ethan Fetaya · Richard Zemel)
- 2018: Incremental Few-Shot Learning with Attention Attractor Networks (Mengye Ren)
- 2017 Poster: The Reversible Residual Network: Backpropagation Without Storing Activations (Aidan Gomez · Mengye Ren · Raquel Urtasun · Roger Grosse)
- 2015 Poster: Exploring Models and Data for Image Question Answering (Mengye Ren · Jamie Kiros · Richard Zemel)