
Robustness of Bayesian Neural Networks to Gradient-Based Attacks
Ginevra Carbone · Matthew Wicker · Luca Laurenti · Andrea Patanè · Luca Bortolussi · Guido Sanguinetti

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #919

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lie on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that, in the limit, BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion-MNIST datasets, with BNNs trained via Hamiltonian Monte Carlo and Variational Inference, support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks.
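To make the attack setting concrete, below is a minimal sketch (not the authors' code) of an FGSM-style gradient attack against a BNN's posterior predictive in PyTorch. The attack direction is the sign of the loss gradient averaged over posterior samples; the paper's argument is that this expected gradient vanishes in the large-data, overparametrized limit, so the perturbation carries little signal. The list posterior_nets is a hypothetical stand-in for networks sampled from the posterior (e.g. via HMC or VI).

    # Minimal sketch, assuming PyTorch and a list `posterior_nets` of
    # networks sampled from the BNN posterior (hypothetical names).
    import torch
    import torch.nn.functional as F

    def bnn_fgsm(posterior_nets, x, y, eps):
        # Monte Carlo estimate of the expected loss under the posterior.
        x_adv = x.clone().requires_grad_(True)
        loss = torch.stack(
            [F.cross_entropy(net(x_adv), y) for net in posterior_nets]
        ).mean()
        loss.backward()
        # FGSM step along the sign of the posterior-averaged gradient;
        # in the paper's limit this gradient averages towards zero.
        return (x_adv + eps * x_adv.grad.sign()).detach()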

Author Information

Ginevra Carbone (University of Trieste)
Matthew Wicker (University of Oxford)
Luca Laurenti (University of Oxford)
Andrea Patanè (University of Oxford)
Luca Bortolussi (University of Trieste, Department of Mathematics and Geosciences)
Guido Sanguinetti (University of Edinburgh)
