Poster
Certifying Confidence via Randomized Smoothing
Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1580

Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems. It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction. However, most smoothing methods do not give any information about the confidence with which the underlying classifier (e.g., a deep neural network) makes a prediction. In this work, we propose a method to generate certified radii for the prediction confidence of the smoothed classifier. We consider two notions for quantifying confidence: the average prediction score of a class and the margin by which the average prediction score of one class exceeds that of another. We modify the Neyman-Pearson lemma (a key theorem in randomized smoothing) to design a procedure for computing the certified radius where the confidence is guaranteed to stay above a certain threshold. Our experimental results on the CIFAR-10 and ImageNet datasets show that using information about the distribution of the confidence scores allows us to achieve a significantly better certified radius than ignoring it. Thus, we demonstrate that extra information about the base classifier at the input point can help improve certified guarantees for the smoothed classifier. Code for the experiments is available at https://github.com/aounon/cdf-smoothing.
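
For context, below is a minimal Python sketch of the standard Gaussian randomized-smoothing certificate (Cohen et al., 2019) that the abstract builds on; it is not the paper's confidence certificate. The names base_classifier, sigma, n, alpha, and num_classes are hypothetical and illustrative; for the actual method, see the repository linked above.

import numpy as np
from scipy.stats import beta, norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Monte Carlo certificate for the smoothed classifier
    g(x) = argmax_c P(f(x + delta) = c), with delta ~ N(0, sigma^2 I).
    `base_classifier` maps a batch of inputs to integer class labels."""
    # Sample hard predictions of the base classifier under Gaussian noise around x.
    noise = np.random.randn(n, *x.shape) * sigma
    preds = base_classifier(x[None, ...] + noise)   # integer array of shape (n,)
    counts = np.bincount(preds, minlength=num_classes)

    # One-sided Clopper-Pearson lower confidence bound on the top-class probability.
    # (Full procedures draw a separate sample to select the top class; this is simplified.)
    top = int(counts.argmax())
    k = int(counts[top])
    p_lower = beta.ppf(alpha, k, n - k + 1)

    if p_lower <= 0.5:
        return None, 0.0                            # abstain: no certificate
    # Certified L2 radius, bounding the runner-up probability by 1 - p_lower.
    return top, sigma * norm.ppf(p_lower)

The paper's contribution replaces the hard top-class probability above with information about the distribution of the base classifier's confidence scores, certifying that the smoothed confidence (average prediction score, or the margin between two classes' scores) stays above a threshold within the radius.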

Author Information

Aounon Kumar (University of Maryland, College Park)
Alexander Levine (University of Maryland, College Park)
Soheil Feizi (University of Maryland)
Tom Goldstein (University of Maryland)
