Poster
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Roman Levin · Manli Shu · Eitan Borgnia · Furong Huang · Micah Goldblum · Tom Goldstein

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #737

Conventional saliency maps highlight input features to which neural network predictions are highly sensitive. We take a different approach to saliency, in which we identify and analyze the network parameters, rather than inputs, which are responsible for erroneous decisions. We first verify that identified salient parameters are indeed responsible for misclassification by showing that turning these parameters off improves predictions on the associated samples more than turning off the same number of random or least salient parameters. We further validate the link between salient parameters and network misclassification errors by observing that fine-tuning a small number of the most salient parameters on a single sample results in error correction on other samples which were misclassified for similar reasons -- nearest neighbors in the saliency space. After validating our parameter-space saliency maps, we demonstrate that samples which cause similar parameters to malfunction are semantically similar. Further, we introduce an input-space saliency counterpart which reveals how image features cause specific network components to malfunction.
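Below is a minimal sketch of the core idea, not the authors' exact procedure: per-parameter saliency is approximated here as the absolute gradient of the loss with respect to each parameter, and "turning off" the most salient parameters is approximated by zeroing a global top-k of them. The paper's actual saliency definition, aggregation, and normalization may differ; the model, input, and value of k below are placeholders.

```python
# Hedged sketch of parameter-space saliency for a single misclassified example.
# Assumptions: a PyTorch classifier, saliency = |dL/dtheta| per parameter,
# and masking the globally top-k most salient parameters by zeroing them.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)   # any classifier; untrained weights, for illustration only
model.eval()
criterion = nn.CrossEntropyLoss()

x = torch.randn(1, 3, 224, 224)         # placeholder for a misclassified image
y = torch.tensor([0])                   # its ground-truth label

# Backpropagate the loss to obtain gradients w.r.t. every parameter.
loss = criterion(model(x), y)
model.zero_grad()
loss.backward()

# Parameter-space saliency: magnitude of the loss gradient per parameter.
saliency = {name: p.grad.abs()
            for name, p in model.named_parameters() if p.grad is not None}

# Find a threshold corresponding to the k most salient parameters network-wide.
flat = torch.cat([s.flatten() for s in saliency.values()])
k = 100
threshold = flat.topk(k).values.min()

# "Turn off" the most salient parameters by zeroing them, mimicking the
# masking experiment described in the abstract.
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in saliency:
            p[saliency[name] >= threshold] = 0.0
```

The same loop could instead zero an equal number of random or least-salient parameters to reproduce the comparison the abstract describes, where masking the most salient parameters should improve the prediction on the associated sample the most.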

Author Information

Roman Levin (University of Washington, Seattle)
Manli Shu (University of Maryland, College Park)
Eitan Borgnia (University of Maryland)
Furong Huang (University of Maryland)
Micah Goldblum (University of Maryland)
Tom Goldstein (University of Maryland)
