The study of provable adversarial robustness has mostly been limited to classification tasks and models with one-dimensional real-valued outputs. We extend the scope of certifiable robustness to problems with more general and structured outputs like sets, images, language, etc. We model the output space as a metric space under a distance/similarity function, such as intersection-over-union, perceptual similarity, total variation distance, etc. Such models are used in many machine learning problems like image segmentation, object detection, generative models, image/audio-to-text systems, etc. Based on a robustness technique called randomized smoothing, our center smoothing procedure can produce models with the guarantee that the change in the output, as measured by the distance metric, remains small for any norm-bounded adversarial perturbation of the input. We apply our method to create certifiably robust models with disparate output spaces -- from sets to images -- and show that it yields meaningful certificates without significantly degrading the performance of the base model.
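The smoothing procedure described above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: it draws Gaussian perturbations of the input, evaluates the base model on each, and returns the sample output whose median distance to the other outputs is smallest, a common approximation of the center of a minimum enclosing ball of the outputs. The function name `center_smooth`, the parameters `sigma` and `n`, and the use of Euclidean distance in place of a general output metric are all assumptions for this sketch; the certification step (bounding the output change under norm-bounded input perturbations) is omitted.

```python
import numpy as np

def center_smooth(f, x, sigma=0.25, n=64, rng=None):
    """Illustrative sketch of a center-smoothing-style procedure.

    Draws n Gaussian perturbations of the input x, evaluates the base
    model f on each, and returns the sampled output whose median
    distance to the other sampled outputs is smallest -- a simple
    approximation of the center of a ball enclosing most outputs.
    Euclidean distance stands in for the general output metric here.
    """
    rng = np.random.default_rng(rng)
    # n noisy copies of the input (assumed sigma is the noise scale)
    xs = x + sigma * rng.standard_normal((n,) + np.shape(x))
    # evaluate the base model on each perturbed input
    ys = np.stack([f(xi) for xi in xs])                  # shape (n, d)
    # pairwise distances between sampled outputs
    d = np.linalg.norm(ys[:, None, :] - ys[None, :, :], axis=-1)
    # pick the output closest (in median distance) to all the others
    med = np.median(d, axis=1)
    return ys[np.argmin(med)]
```

In the paper's setting, `f` would be a model with structured output (a segmentation mask, a detection box set, etc.) and the distance would be the task's similarity function (e.g. intersection-over-union); the sketch above uses Euclidean distance purely for concreteness.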
Author Information
Aounon Kumar (University of Maryland, College Park)
Tom Goldstein (University of Maryland, College Park)
More from the Same Authors
- 2021 : A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs »
  Mucong Ding · Kezhi Kong · Jiuhai Chen · John Kirchenbauer · Micah Goldblum · David P Wipf · Furong Huang · Tom Goldstein
- 2021 : Diurnal or Nocturnal? Federated Learning from Periodically Shifting Distributions »
  Chen Zhu · Zheng Xu · Mingqing Chen · Jakub Konečný · Andrew S Hard · Tom Goldstein
- 2021 : Learning Revenue-Maximizing Auctions With Differentiable Matching »
  Michael Curry · Uro Lyi · Tom Goldstein · John P Dickerson
- 2021 Poster: Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks »
  Avi Schwarzschild · Eitan Borgnia · Arjun Gupta · Furong Huang · Uzi Vishkin · Micah Goldblum · Tom Goldstein
- 2021 Poster: VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization »
  Mucong Ding · Kezhi Kong · Jingling Li · Chen Zhu · John Dickerson · Furong Huang · Tom Goldstein
- 2021 Poster: GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training »
  Chen Zhu · Renkun Ni · Zheng Xu · Kezhi Kong · W. Ronny Huang · Tom Goldstein
- 2021 Poster: Gradient-Free Adversarial Training Against Image Corruption for Learning-based Steering »
  Yu Shen · Laura Zheng · Manli Shu · Weizi Li · Tom Goldstein · Ming Lin
- 2021 Poster: Adversarial Examples Make Strong Poisons »
  Liam Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Wojciech Czaja · Tom Goldstein
- 2021 Poster: Encoding Robustness to Image Style via Adversarial Feature Perturbations »
  Manli Shu · Zuxuan Wu · Micah Goldblum · Tom Goldstein
- 2021 Poster: Long-Short Transformer: Efficient Transformers for Language and Vision »
  Chen Zhu · Wei Ping · Chaowei Xiao · Mohammad Shoeybi · Tom Goldstein · Anima Anandkumar · Bryan Catanzaro
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing »
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein