Conventional saliency maps highlight input features to which neural network predictions are highly sensitive. We take a different approach to saliency, in which we identify and analyze the network parameters, rather than inputs, that are responsible for erroneous decisions. We first verify that the identified salient parameters are indeed responsible for misclassification by showing that turning these parameters off improves predictions on the associated samples more than turning off the same number of random or least salient parameters. We further validate the link between salient parameters and network misclassification errors by observing that fine-tuning a small number of the most salient parameters on a single sample corrects errors on other samples that were misclassified for similar reasons: nearest neighbors in the saliency space. After validating our parameter-space saliency maps, we demonstrate that samples that cause similar parameters to malfunction are semantically similar. Further, we introduce an input-space saliency counterpart which reveals how image features cause specific network components to malfunction.
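The core idea can be sketched in a few lines, under the assumption that a parameter's saliency is the magnitude of the loss gradient with respect to that parameter on a misclassified sample. The toy logistic-regression "network", the top-k cutoff, and zeroing weights as "turning off" are all illustrative choices here, not the paper's exact method:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grad(w, x, y):
    """Binary cross-entropy loss and its gradient with respect to weights w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
    grad = [(p - y) * xi for xi in x]  # dLoss/dw_i for logistic regression
    return loss, grad

w = [random.gauss(0, 1) for _ in range(8)]  # the "network parameters"
x = [random.gauss(0, 1) for _ in range(8)]  # a single input sample
y = 1.0                                     # its true label

loss, grad = loss_and_grad(w, x, y)

# Parameter-space saliency: |dLoss/dw_i| for each parameter on this sample.
saliency = [abs(g) for g in grad]

# Indices of the k most salient parameters (k = 2 here, purely illustrative).
top_k = sorted(range(len(w)), key=lambda i: saliency[i])[-2:]

# "Turn off" the most salient parameters by zeroing them, then re-evaluate
# the loss on the same sample to see how the prediction changes.
w_off = list(w)
for i in top_k:
    w_off[i] = 0.0
loss_off, _ = loss_and_grad(w_off, x, y)
```

In the paper's setting, the same ranking would be computed over the parameters of a deep network on a misclassified image, and turning off the top-ranked parameters is compared against turning off random or least salient ones.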
Author Information
Roman Levin (University of Washington, Seattle)
Manli Shu (University of Maryland, College Park)
Eitan Borgnia (University of Maryland)
Furong Huang (University of Maryland)
Micah Goldblum (University of Maryland)
Tom Goldstein (University of Maryland)
More from the Same Authors
- 2020 : An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process »
  David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein
- 2021 : Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations »
  Harrison Foley · Liam Fowl · Tom Goldstein · Gavin Taylor
- 2021 : Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL »
  Yanchao Sun · Ruijie Zheng · Yongyuan Liang · Furong Huang
- 2021 : Efficiently Improving the Robustness of RL Agents against Strongest Adversaries »
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2021 : A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs »
  Mucong Ding · Kezhi Kong · Jiuhai Chen · John Kirchenbauer · Micah Goldblum · David P Wipf · Furong Huang · Tom Goldstein
- 2022 : Investigating Reproducibility from the Decision Boundary Perspective »
  Gowthami Somepalli · Arpit Bansal · Liam Fowl · Ping-yeh Chiang · Yehuda Dar · Richard Baraniuk · Micah Goldblum · Tom Goldstein
- 2022 : A Deep Dive into Dataset Imbalance and Bias in Face Identification »
  Valeriia Cherepanova · Steven Reich · Samuel Dooley · Hossein Souri · John Dickerson · Micah Goldblum · Tom Goldstein
- 2022 : SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training »
  Gowthami Somepalli · Avi Schwarzschild · Micah Goldblum · C. Bayan Bruss · Tom Goldstein
- 2022 : Transfer Learning with Deep Tabular Models »
  Roman Levin · Valeriia Cherepanova · Avi Schwarzschild · Arpit Bansal · C. Bayan Bruss · Tom Goldstein · Andrew Wilson · Micah Goldblum
- 2022 : SMART: Self-supervised Multi-task pretrAining with contRol Transformers »
  Yanchao Sun · shuang ma · Ratnesh Madaan · Rogerio Bonatti · Furong Huang · Ashish Kapoor
- 2022 : Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning »
  Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Pratap Tokekar · Dinesh Manocha
- 2022 : GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint »
  Paiheng Xu · Yuhang Zhou · Bang An · Wei Ai · Furong Huang
- 2022 : Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
  Xiangyu Liu · Souradip Chakraborty · Furong Huang
- 2022 : On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition »
  Samuel Dooley · Rhea Sukthanker · John Dickerson · Colin White · Frank Hutter · Micah Goldblum
- 2022 : Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
  Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang
- 2022 : Faster Hyperparameter Search on Graphs via Calibrated Dataset Condensation »
  Mucong Ding · Xiaoyu Liu · Tahseen Rabbani · Furong Huang
- 2022 : Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries »
  Yuxin Wen · Arpit Bansal · Hamid Kazemi · Eitan Borgnia · Micah Goldblum · Jonas Geiping · Tom Goldstein
- 2022 : Panning for Gold in Federated Learning: Targeted Text Extraction under Arbitrarily Large-Scale Aggregation »
  Hong-Min Chu · Jonas Geiping · Liam Fowl · Micah Goldblum · Tom Goldstein
- 2022 : Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models »
  Liam Fowl · Jonas Geiping · Steven Reich · Yuxin Wen · Wojciech Czaja · Micah Goldblum · Tom Goldstein
- 2022 : On Representation Learning Under Class Imbalance »
  Ravid Shwartz-Ziv · Micah Goldblum · Yucen Li · C. Bayan Bruss · Andrew Gordon Wilson
- 2022 : DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations »
  Eitan Borgnia · Jonas Geiping · Valeriia Cherepanova · Liam Fowl · Arjun Gupta · Amin Ghiasi · Furong Huang · Micah Goldblum · Tom Goldstein
- 2022 : Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function »
  Ruijie Zheng · Xiyao Wang · Huazhe Xu · Furong Huang
- 2022 : Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
  Xiangyu Liu · Souradip Chakraborty · Furong Huang
- 2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 : SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication »
  Marco Bornstein · Tahseen Rabbani · Evan Wang · Amrit Bedi · Furong Huang
- 2022 Poster: Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
  Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang
- 2022 Poster: Robustness Disparities in Face Detection »
  Samuel Dooley · George Z Wei · Tom Goldstein · John Dickerson
- 2022 Poster: Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers »
  Wanqian Yang · Polina Kirichenko · Micah Goldblum · Andrew Wilson
- 2022 Poster: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models »
  Manli Shu · Weili Nie · De-An Huang · Zhiding Yu · Tom Goldstein · Anima Anandkumar · Chaowei Xiao
- 2022 Poster: Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors »
  Ravid Shwartz-Ziv · Micah Goldblum · Hossein Souri · Sanyam Kapoor · Chen Zhu · Yann LeCun · Andrew Wilson
- 2022 Poster: Autoregressive Perturbations for Data Poisoning »
  Pedro Sandoval-Segura · Vasu Singla · Jonas Geiping · Micah Goldblum · Tom Goldstein · David Jacobs
- 2022 Poster: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning »
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2022 Poster: Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch »
  Hossein Souri · Liam Fowl · Rama Chellappa · Micah Goldblum · Tom Goldstein
- 2022 Poster: PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization »
  Sanae Lotfi · Marc Finzi · Sanyam Kapoor · Andres Potapczynski · Micah Goldblum · Andrew Wilson
- 2022 Poster: End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking »
  Arpit Bansal · Avi Schwarzschild · Eitan Borgnia · Zeyad Emam · Furong Huang · Micah Goldblum · Tom Goldstein
- 2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 Poster: Transferring Fairness under Distribution Shifts via Fair Consistency Regularization »
  Bang An · Zora Che · Mucong Ding · Furong Huang
- 2021 Poster: Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks »
  Avi Schwarzschild · Eitan Borgnia · Arjun Gupta · Furong Huang · Uzi Vishkin · Micah Goldblum · Tom Goldstein
- 2021 Poster: Gradient-Free Adversarial Training Against Image Corruption for Learning-based Steering »
  Yu Shen · Laura Zheng · Manli Shu · Weizi Li · Tom Goldstein · Ming Lin
- 2021 Poster: Adversarial Examples Make Strong Poisons »
  Liam Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Wojciech Czaja · Tom Goldstein
- 2021 Poster: Encoding Robustness to Image Style via Adversarial Feature Perturbations »
  Manli Shu · Zuxuan Wu · Micah Goldblum · Tom Goldstein
- 2020 : The Intrinsic Dimension of Images and Its Impact on Learning »
  Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope
- 2020 Workshop: Workshop on Dataset Curation and Security »
  Nathalie Baracaldo · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing »
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein
- 2020 Poster: Certifying Confidence via Randomized Smoothing »
  Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein
- 2020 Poster: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach »
  Micah Goldblum · Liam Fowl · Tom Goldstein
- 2020 Poster: MetaPoison: Practical General-purpose Clean-label Data Poisoning »
  W. Ronny Huang · Jonas Geiping · Liam Fowl · Gavin Taylor · Tom Goldstein
- 2020 Poster: Certifying Strategyproof Auction Networks »
  Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson
- 2019 Poster: Adversarial training for free! »
  Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein
- 2018 Poster: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks »
  Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein
- 2018 Poster: Visualizing the Loss Landscape of Neural Nets »
  Hao Li · Zheng Xu · Gavin Taylor · Christoph Studer · Tom Goldstein
- 2017 Poster: Training Quantized Nets: A Deeper Understanding »
  Hao Li · Soham De · Zheng Xu · Christoph Studer · Hanan Samet · Tom Goldstein
- 2015 : Spotlight »
  Furong Huang · William Gray Roncal · Tom Goldstein
- 2015 Poster: Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing »
  Tom Goldstein · Min Li · Xiaoming Yuan