Poster
Hardness of Learning Neural Networks with Natural Weights
Amit Daniely · Gal Vardi
Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture, and assume that the network's weights are arbitrary.
A natural approach to settle this discrepancy is to assume that the network's weights are "well-behaved" and possess some generic properties that may allow efficient learning. This approach is supported by the intuition that the weights in real-world networks are not arbitrary, but exhibit some "random-like" properties with respect to some "natural" distributions.
We prove negative results in this regard, and show that for depth-$2$ networks and many "natural" weight distributions, such as the normal and the uniform distribution, most networks are hard to learn. Namely, there is no efficient learning algorithm that is provably successful for most weights and every input distribution. This implies that there is no generic property that holds with high probability in such random networks and allows efficient learning.
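To make the setting concrete, the sketch below samples a random depth-2 ReLU network whose weights are drawn from one of the "natural" distributions the abstract mentions (Gaussian). The dimensions, variance scaling, and ReLU activation here are illustrative assumptions, not taken from the paper; the sketch only shows the kind of object whose learnability the result addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 20, 50  # input dimension and hidden width (illustrative choices)

# Draw a random depth-2 network: hidden weights W and output weights v are
# i.i.d. Gaussian, one example of a "natural" weight distribution.  The
# 1/sqrt(fan-in) scaling is a common convention, assumed here for illustration.
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(k, d))
v = rng.normal(0.0, 1.0 / np.sqrt(k), size=k)

def net(x):
    """Depth-2 ReLU network: x -> v . relu(W x)."""
    return v @ np.maximum(W @ x, 0.0)

# A learner sees labeled samples (x, net(x)).  The hardness result says that
# no efficient algorithm is provably successful for most such draws of (W, v)
# under every input distribution.
X = rng.normal(size=(5, d))
labels = np.array([net(x) for x in X])
```

Note that the negative result is distribution-free over inputs: even with the weights being this "random-like", efficient learning cannot be guaranteed for all input distributions.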
Author Information
Amit Daniely (Hebrew University and Google Research)
Gal Vardi (Weizmann Institute of Science)
More from the Same Authors

2022 : On Convexity and Linear Mode Connectivity in Neural Networks »
David Yunis · Kumar Kshitij Patel · Pedro Savarese · Gal Vardi · Jonathan Frankle · Matthew Walter · Karen Livescu · Michael Maire 
2022 Panel: Panel 1C-2: Reconstructing Training Data… & On Optimal Learning… »
Gal Vardi · Idan Mehalel 
2022 Poster: On Margin Maximization in Linear and ReLU Networks »
Gal Vardi · Ohad Shamir · Nati Srebro 
2022 Poster: The Sample Complexity of One-Hidden-Layer Neural Networks »
Gal Vardi · Ohad Shamir · Nati Srebro 
2022 Poster: On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias »
Itay Safran · Gal Vardi · Jason Lee 
2022 Poster: Reconstructing Training Data From Trained Neural Networks »
Niv Haim · Gal Vardi · Gilad Yehudai · Ohad Shamir · Michal Irani 
2022 Poster: Gradient Methods Provably Converge to Non-Robust Networks »
Gal Vardi · Gilad Yehudai · Ohad Shamir 
2021 Poster: Learning a Single Neuron with Bias Using Gradient Descent »
Gal Vardi · Gilad Yehudai · Ohad Shamir 
2020 Poster: Neural Networks Learning and Memorization with (almost) no Over-Parameterization »
Amit Daniely 
2020 Poster: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
Amit Daniely · Hadas Shacham 
2020 Poster: Neural Networks with Small Weights and Depth-Separation Barriers »
Gal Vardi · Ohad Shamir 
2020 Spotlight: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
Amit Daniely · Hadas Shacham 
2020 Poster: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2020 Oral: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2019 Poster: Locally Private Learning without Interaction Requires Separation »
Amit Daniely · Vitaly Feldman 
2019 Poster: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2019 Spotlight: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2017 Poster: SGD Learns the Conjugate Kernel Class of the Network »
Amit Daniely 
2016 Poster: Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity »
Amit Daniely · Roy Frostig · Yoram Singer 
2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors »
Amit Daniely · Nati Linial · Shai Shalev-Shwartz
2013 Spotlight: More data speeds up training time in learning halfspaces over sparse vectors »
Amit Daniely · Nati Linial · Shai Shalev-Shwartz
2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz
2012 Spotlight: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz