Tight estimation of the Lipschitz constant of deep neural networks (DNNs) is useful in many applications, ranging from robustness certification of classifiers to stability analysis of closed-loop systems with reinforcement learning controllers. Existing methods for estimating the Lipschitz constant suffer from either a lack of accuracy or poor scalability. In this paper, we present a convex optimization framework to compute guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently. Our main idea is to interpret activation functions as gradients of convex potential functions; as such, they satisfy slope restrictions that can be described by quadratic constraints. This description allows us to pose the Lipschitz constant estimation problem as a semidefinite program (SDP). The resulting SDP can be adapted to increase either estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation). We illustrate the utility of our approach with a variety of experiments on randomly generated networks and on classifiers trained on the MNIST and Iris datasets. In particular, we experimentally demonstrate that our Lipschitz bounds are the most accurate among those in the literature. We also study the impact of adversarial training on the Lipschitz bounds of the resulting classifiers and show that our bounds can be used to provide robustness guarantees efficiently.
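To make the formulation above concrete, here is a minimal sketch of how such an SDP might look for a one-hidden-layer network, written with cvxpy and the SCS solver. Everything in it is illustrative rather than taken from the paper: the toy weights W0 and W1, the layer widths, the multiplier variable lam, and the exact form of the matrix inequality are our assumptions for a slope-restricted (ReLU-like) activation.

```python
# A minimal sketch (not the authors' code) of the SDP idea for a
# one-hidden-layer network f(x) = W1 @ phi(W0 @ x + b0) + b1, where the
# activation phi is slope-restricted on [alpha, beta] (ReLU: [0, 1]).
# Requires cvxpy and an SDP-capable solver such as SCS.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 4, 16, 2                        # toy input/hidden/output widths
W0 = rng.standard_normal((n1, n0)) / np.sqrt(n0)
W1 = rng.standard_normal((n2, n1)) / np.sqrt(n1)
alpha, beta = 0.0, 1.0                       # ReLU slope bounds

rho = cp.Variable(nonneg=True)               # rho upper-bounds L^2
lam = cp.Variable(n1, nonneg=True)           # one multiplier per hidden neuron
T = cp.diag(lam)

# Matrix inequality coupling the weights with the quadratic constraints
# satisfied by the slope-restricted activations.
M = cp.bmat([
    [-2 * alpha * beta * (W0.T @ T @ W0) - rho * np.eye(n0),
     (alpha + beta) * (W0.T @ T)],
    [(alpha + beta) * (T @ W0),
     -2 * T + W1.T @ W1],
])
# Symmetrize explicitly so cvxpy accepts the semidefinite constraint.
M_sym = (M + M.T) / 2

prob = cp.Problem(cp.Minimize(rho), [M_sym << 0])
prob.solve(solver=cp.SCS)
print("certified Lipschitz upper bound:", np.sqrt(rho.value))
```

The decision variables are the squared Lipschitz bound rho and one nonnegative multiplier per hidden neuron; feasibility of the matrix inequality certifies that sqrt(rho) upper-bounds the network's Lipschitz constant. Bounds of this kind are typically much tighter than the naive product of the layers' spectral norms, which is the usual baseline.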
Author Information
Mahyar Fazlyab (University of Pennsylvania)
Alexander Robey (University of Pennsylvania)
Hamed Hassani (University of Pennsylvania)
Manfred Morari (University of Pennsylvania)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science and of Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center, and has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received various awards, including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
  Tue. Dec 10th, 06:30 -- 06:35 PM, West Exhibition Hall C + B3
More from the Same Authors
- 2023 Poster: The noise level in linear regression with dependent data »
  Ingvar Ziemann · Stephen Tu · George J. Pappas · Nikolai Matni
- 2023 Poster: Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization »
  Mahyar Fazlyab · Taha Entesari · Aniket Roy · Rama Chellappa
- 2022 Spotlight: Learning Operators with Coupled Attention »
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints »
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2022 Poster: NOMAD: Nonlinear Manifold Decoders for Operator Learning »
  Jacob Seidman · Georgios Kissas · Paris Perdikaris · George J. Pappas
- 2022 Poster: Learning Operators with Coupled Attention »
  Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization »
  Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds »
  Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021 Poster: Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients »
  Aritra Mitra · Rayana Jaafar · George J. Pappas · Hamed Hassani
- 2021 Poster: Model-Based Domain Generalization »
  Alexander Robey · George J. Pappas · Hamed Hassani
- 2021 Poster: Adversarial Robustness with Semi-Infinite Constrained Learning »
  Alexander Robey · Luiz Chamon · George J. Pappas · Hamed Hassani · Alejandro Ribeiro
- 2021 Poster: Safe Pontryagin Differentiable Programming »
  Wanxin Jin · Shaoshuai Mou · George J. Pappas
- 2020 Poster: Sinkhorn Natural Gradient for Generative Models »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Spotlight: Sinkhorn Natural Gradient for Generative Models »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Session: Orals & Spotlights Track 32: Optimization »
  Hamed Hassani · Jeffrey A Bilmes
- 2020 Poster: Submodular Meta-Learning »
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback »
  Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
- 2019 Poster: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match »
  Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
- 2019 Poster: Robust and Communication-Efficient Collaborative Learning »
  Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2017 Poster: Gradient Methods for Submodular Maximization »
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
- 2017 Poster: Stochastic Submodular Maximization: The Case of Coverage Functions »
  Mohammad Karimi · Mario Lucic · Hamed Hassani · Andreas Krause