Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #811

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) (Kothari and Klivans 2014, Goel et al. 2020a, Diakonikolas et al. 2020a) or restricted models such as correlational SQ (Goel et al. 2020b, Diakonikolas et al. 2020b). Prior work hinted at the impossibility of our result: Vempala and Wilmes (2019) showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi (2021) that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from membership queries.
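For readers unfamiliar with the model, the following is a standard statement of the statistical query (SQ) framework in which the abstract's lower bounds are proved; the notation here is generic rather than taken from the paper itself.

```latex
% Statistical query (SQ) model (standard definition).
% An SQ learner never sees labeled examples (x, y) ~ D directly.
% Instead, it may submit a bounded query
%   \phi : \mathcal{X} \times \mathcal{Y} \to [-1, 1]
% to an oracle STAT(\tau), which returns any value v satisfying
%   \bigl| \, v - \mathbb{E}_{(x,y) \sim D}[\phi(x,y)] \, \bigr| \le \tau,
% where \tau > 0 is the tolerance.
% An SQ lower bound shows that any learner succeeding with
% tolerance \tau must make superpolynomially many such queries
% (or use superpolynomially small \tau).
```

Superpolynomial SQ lower bounds rule out essentially all known learning algorithms that operate via noise-tolerant statistics, which is why establishing them in the noise-free Gaussian setting, as this paper does, was a notable open problem.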

Author Information

Sitan Chen (University of California Berkeley)
Aravind Gollakota (University of Texas at Austin)
Adam Klivans (University of Texas at Austin)
Raghu Meka (UCLA)