Poster
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee · Yang Yuan · Shiyu Chang · Tommi Jaakkola

Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #4
Strong theoretical guarantees of robustness can be given for ensembles of classifiers generated by input randomization. Specifically, an $\ell_2$-bounded adversary cannot alter the ensemble prediction generated by additive isotropic Gaussian noise, where the radius available to the adversary depends on both the variance of the distribution and the ensemble margin at the point of interest. We build on and considerably expand this work across broad classes of distributions. In particular, we offer adversarial robustness guarantees and associated algorithms for the discrete case, where the adversary is $\ell_0$-bounded. Moreover, we show how the guarantees can be tightened with specific assumptions about the function class of the classifier, such as a decision tree. We empirically illustrate these results, with and without functional restrictions, across image and molecule datasets.
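The $\ell_2$ guarantee referenced above (from prior work on Gaussian randomized smoothing) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes the smoothed ensemble's top-class probability `p_a` is known exactly, and uses the simplified binary-case form of the bound, where the certified radius is $\sigma \, \Phi^{-1}(p_a)$ (the radius grows with both the noise variance and the ensemble margin, as the abstract notes).

```python
from statistics import NormalDist

def certified_l2_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier (sketch).

    If the smoothed ensemble assigns the top class probability
    p_a > 0.5 under N(0, sigma^2 I) input noise, no l2 perturbation
    of norm less than sigma * Phi^{-1}(p_a) can change the prediction.
    Assumes p_a is exact; in practice it is lower-bounded by sampling.
    """
    if p_a <= 0.5:
        return 0.0  # no majority margin, so no certificate
    return sigma * NormalDist().inv_cdf(p_a)
```

For example, a smoothed classifier with `p_a = 0.9` certified under noise `sigma = 1.0` doubles its certified radius when the same margin is achieved at `sigma = 2.0`, reflecting the variance dependence described above.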

Author Information

Guang-He Lee (MIT)
Yang Yuan (MIT)
Shiyu Chang (IBM T.J. Watson Research Center)
Tommi Jaakkola (MIT)

Tommi Jaakkola is a professor of Electrical Engineering and Computer Science at MIT. He received an M.Sc. degree in theoretical physics from Helsinki University of Technology and a Ph.D. from MIT in computational neuroscience. Following a Sloan postdoctoral fellowship in computational molecular biology, he joined the MIT faculty in 1998. His research interests include statistical inference, graphical models, and large-scale modern estimation problems with predominantly incomplete data.
