It is known that learning deep neural networks is computationally hard in the worst case. In fact, the proofs of such hardness results show that even weakly learning deep networks is hard: no efficient algorithm can find a predictor that is even slightly better than a random guess. However, we observe that on natural distributions of images, small patches of the input image are correlated with the target label, which implies that on such natural data, efficient weak learning is trivial. While in the distribution-free setting the celebrated boosting results show that weak learning implies strong learning, in the distribution-specific setting this is not necessarily the case. We introduce a property of distributions, denoted "local correlation", which requires that small patches of the input image and of intermediate layers of the target function are correlated with the target label. We empirically demonstrate that this property holds for the CIFAR and ImageNet data sets. The main technical result of the paper is proving that, for some classes of deep functions, weak learning implies efficient strong learning under the "local correlation" assumption.
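The paper's empirical observation (small patches correlate with the label, so a patch-based weak learner beats chance) can be illustrated with a minimal sketch. The data here is synthetic (a label-dependent bias injected into one patch, standing in for CIFAR/ImageNet), and the threshold rule is a generic weak learner, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an image dataset: each 8x8 "image" hides a
# label-dependent mean shift in its top-left 3x3 patch, mimicking the
# observation that small patches are correlated with the target label.
n, side, k = 2000, 8, 3
y = rng.integers(0, 2, size=n)                      # binary labels in {0, 1}
X = rng.normal(size=(n, side, side))
X[:, :k, :k] += 0.5 * (2 * y - 1)[:, None, None]    # inject patch-label correlation

patches = X[:, :k, :k].reshape(n, -1)               # flatten the k x k patch

# A trivial weak learner: threshold the mean patch intensity at the
# midpoint between the two per-class means estimated on a train split.
train, test = slice(0, 1000), slice(1000, None)
mu0 = patches[train][y[train] == 0].mean()
mu1 = patches[train][y[train] == 1].mean()
thresh = (mu0 + mu1) / 2
pred = (patches[test].mean(axis=1) > thresh).astype(int)

acc = (pred == y[test]).mean()
print(f"patch-based weak learner accuracy: {acc:.2f}")
```

Because the per-sample patch mean is well separated across classes, this one-parameter rule lands clearly above the 0.5 chance level, which is exactly the sense in which weak learning becomes trivial when local correlation holds.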
Author Information
Eran Malach (Hebrew University, Jerusalem, Israel)
Shai Shalev-Shwartz (Mobileye & HUJI)
More from the Same Authors
- 2023 Poster: Resource Tradeoffs for Deep Feature Learning: Data, Compute, Width, and Luck (Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang)
- 2022 Poster: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit (Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang)
- 2022 Poster: Knowledge Distillation: Bad Models Can Be Good Role Models (Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz)
- 2021 Spotlight: On the Power of Differentiable Learning versus PAC and SQ Learning (Emmanuel Abbe · Pritish Kamath · Eran Malach · Colin Sandon · Nathan Srebro)
- 2021 Poster: On the Power of Differentiable Learning versus PAC and SQ Learning (Emmanuel Abbe · Pritish Kamath · Eran Malach · Colin Sandon · Nathan Srebro)
- 2021: Q&A with Shai Shalev-Shwartz (Shai Shalev-Shwartz)
- 2021: Deep Learning: Success, Failure, and the Border between them (Shai Shalev-Shwartz)
- 2020 Oral: Learning Parities with Neural Networks (Amit Daniely · Eran Malach)
- 2020 Poster: Learning Parities with Neural Networks (Amit Daniely · Eran Malach)
- 2019 Poster: Is Deeper Better only when Shallow is Good? (Eran Malach · Shai Shalev-Shwartz)
- 2017 Poster: Decoupling "when to update" from "how to update" (Eran Malach · Shai Shalev-Shwartz)
- 2016 Poster: Learning a Metric Embedding for Face Recognition using the Multibatch Method (Oren Tadmor · Tal Rosenwein · Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua)
- 2015 Poster: Beyond Convexity: Stochastic Quasi-Convex Optimization (Elad Hazan · Kfir Y. Levy · Shai Shalev-Shwartz)
- 2014 Poster: On the Computational Efficiency of Training Neural Networks (Roi Livni · Shai Shalev-Shwartz · Ohad Shamir)
- 2013 Spotlight: More data speeds up training time in learning halfspaces over sparse vectors (Amit Daniely · Nati Linial · Shai Shalev-Shwartz)
- 2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors (Amit Daniely · Nati Linial · Shai Shalev-Shwartz)
- 2013 Poster: Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (Shai Shalev-Shwartz · Tong Zhang)
- 2012 Spotlight: Multiclass Learning Approaches: A Theoretical Comparison with Implications (Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz)
- 2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications (Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz)
- 2012 Poster: Learning Halfspaces with the Zero-One Loss: Time-Accuracy Tradeoffs (Aharon Birnbaum · Shai Shalev-Shwartz)
- 2011 Poster: ShareBoost: Efficient multiclass learning with feature sharing (Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua)
- 2011 Session: Spotlight Session 4 (Shai Shalev-Shwartz)
- 2011 Session: Oral Session 4 (Shai Shalev-Shwartz)
- 2008 Spotlight: Mind the Duality Gap: Logarithmic regret algorithms for online optimization (Shai Shalev-Shwartz · Sham M Kakade)
- 2008 Poster: Mind the Duality Gap: Logarithmic regret algorithms for online optimization (Shai Shalev-Shwartz · Sham M Kakade)
- 2008 Poster: Fast Rates for Regularized Objectives (Karthik Sridharan · Shai Shalev-Shwartz · Nati Srebro)
- 2006 Spotlight: Convex Repeated Games and Fenchel Duality (Shai Shalev-Shwartz · Yoram Singer)
- 2006 Poster: Convex Repeated Games and Fenchel Duality (Shai Shalev-Shwartz · Yoram Singer)
- 2006 Poster: Online Classification for Complex Problems Using Simultaneous Projections (Yonatan Amit · Shai Shalev-Shwartz · Yoram Singer)