Poster
A single gradient step finds adversarial examples on random two-layers neural networks
Sebastien Bubeck · Yeshwanth Cherapanamjeri · Gauthier Gidel · Remi Tachet des Combes
Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layers ReLU neural networks. Here "undercomplete" refers to the fact that their proof only holds when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (yet still subexponential in it). In fact, we prove that a single step of gradient descent suffices. We also prove the result for any random neural network of subexponential width with a smooth activation function.
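The setting can be illustrated numerically. The sketch below is our own illustration, not the authors' code: it draws a random two-layer ReLU network f(x) = Σᵢ aᵢ ReLU(⟨wᵢ, x⟩) with Gaussian hidden weights in the overcomplete regime k > d, and takes a single gradient step on the input, scaled to overshoot the linearized output. The sign of the output typically flips while the perturbation stays small relative to ‖x‖ = √d. The dimensions, weight scalings, and overshoot factor are choices made here for illustration only.

```python
import numpy as np

def one_step_attack(d=200, k=2000, overshoot=2.0, rng=None):
    """Single gradient step on the input of a random two-layer ReLU net.

    Returns (f(x), f(x + delta), ||delta|| / ||x||).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random network in the overcomplete regime k > d:
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(k, d))   # hidden weights, rows ~ N(0, I/d)
    a = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)     # random +-1 output weights
    x = rng.normal(size=d)
    x *= np.sqrt(d) / np.linalg.norm(x)                  # input on the sphere of radius sqrt(d)

    pre = W @ x
    f = a @ np.maximum(pre, 0.0)                         # network output f(x)
    grad = (a * (pre > 0)) @ W                           # gradient of f with respect to x
    # One gradient step, scaled to (over)cancel the linearized output:
    delta = -overshoot * f * grad / (np.linalg.norm(grad) ** 2)
    f_adv = a @ np.maximum(W @ (x + delta), 0.0)
    return f, f_adv, np.linalg.norm(delta) / np.linalg.norm(x)
```

In this regime the gradient norm is large relative to |f(x)|, so the step needed to cross the decision boundary is a small fraction of ‖x‖; running the function repeatedly shows the output sign flipping after a perturbation of roughly 10% of the input norm.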
Author Information
Sebastien Bubeck (Microsoft Research)
Yeshwanth Cherapanamjeri (UC Berkeley)
Gauthier Gidel (Mila)
Remi Tachet des Combes (MSR Montreal)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: A single gradient step finds adversarial examples on random two-layers neural networks »
More from the Same Authors
- 2021: Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021 Spotlight 4: Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021 Poster: Dr Jekyll & Mr Hyde: the strange case of off-policy policy updates »
  Romain Laroche · Remi Tachet des Combes
- 2021 Poster: Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity »
  Nicolas Loizou · Hugo Berard · Gauthier Gidel · Ioannis Mitliagkas · Simon Lacoste-Julien
- 2021 Poster: Adversarial Examples in Multi-Layer Random ReLU Networks »
  Peter Bartlett · Sebastien Bubeck · Yeshwanth Cherapanamjeri
- 2021 Poster: A Universal Law of Robustness via Isoperimetry »
  Sebastien Bubeck · Mark Sellke
- 2021 Oral: A Universal Law of Robustness via Isoperimetry »
  Sebastien Bubeck · Mark Sellke
- 2020 Poster: On Adaptive Distance Estimation »
  Yeshwanth Cherapanamjeri · Jelani Nelson
- 2020 Spotlight: On Adaptive Distance Estimation »
  Yeshwanth Cherapanamjeri · Jelani Nelson
- 2020 Poster: Network size and size of the weights in memorization with two-layers neural networks »
  Sebastien Bubeck · Ronen Eldan · Yin Tat Lee · Dan Mikulincer
- 2019 Poster: Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers »
  Hadi Salman · Jerry Li · Ilya Razenshteyn · Pengchuan Zhang · Huan Zhang · Sebastien Bubeck · Greg Yang
- 2019 Spotlight: Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers »
  Hadi Salman · Jerry Li · Ilya Razenshteyn · Pengchuan Zhang · Huan Zhang · Sebastien Bubeck · Greg Yang
- 2019 Poster: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Spotlight: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2018 Poster: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks »
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Oral: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks »
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Poster: Is Q-Learning Provably Efficient? »
  Chi Jin · Zeyuan Allen-Zhu · Sebastien Bubeck · Michael Jordan