

Invited Talk in Workshop: I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification

Kathrin Grosse: On the Limitations of Bayesian Uncertainty in Adversarial Settings.

Kathrin Grosse


Abstract:

Adversarial examples have long been recognized as a threat, and they still pose problems, as they are hard to defend against. Naturally, one might be tempted to think that an image that looks like a panda but is classified as a gibbon should be unusual, or at least unusual enough to be detected by, for example, Bayesian uncertainty measures. Alas, it turns out that Bayesian confidence and uncertainty measures, too, are easy to fool once the attack's optimization procedure is adapted accordingly. Moreover, adversarial examples transfer between different methods, so these measures can also be attacked in a black-box setting. To conclude the talk, we will briefly discuss the practical necessity of defending against evasion, and what is needed not only to evaluate defenses properly, but also to build practical defenses.
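As a rough illustration of the kind of "adapted optimization" the abstract alludes to, the sketch below shows a hypothetical PGD-style attack whose loss rewards both the target misclassification and low predictive entropy under MC dropout, so the crafted input also looks "certain" to a Bayesian-style uncertainty measure. This is not the speaker's exact method; the model, hyperparameters, and the entropy weight `lam` are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch classifier with dropout layers.
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=10):
    """Average softmax over stochastic forward passes (dropout kept active)."""
    model.train()  # keeps dropout stochastic; a real attack would freeze batch norm
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0)  # approximate predictive distribution

def uncertainty_aware_pgd(model, x, target, eps=8/255, step=1/255, iters=40,
                          lam=1.0, n_samples=10):
    """Craft x_adv that is (a) classified as `target` and (b) low-entropy under
    MC dropout, i.e. it evades both the classifier and the uncertainty check."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        p = mc_dropout_predict(model, x_adv, n_samples)
        ce = F.nll_loss(torch.log(p + 1e-12), target)            # hit the target class
        entropy = -(p * torch.log(p + 1e-12)).sum(dim=1).mean()  # stay "certain"
        loss = ce + lam * entropy
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()            # descend the joint loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                     # keep a valid image
    return x_adv.detach()
```

The only change relative to a standard PGD attack is the entropy penalty: a defense that thresholds on predictive uncertainty is only as strong as an attacker who ignores that term.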
