Poster

On the Limitations of Stochastic Pre-processing Defenses

Yue Gao · Ilia Shumailov · Kassem Fawaz · Nicolas Papernot

Hall J (level 1) #906

Keywords: [ limitations ] [ preprocessing defenses ] [ randomized defenses ] [ input transformation ] [ adversarial examples ]


Abstract:

Defending against adversarial examples remains an open problem. A common belief is that randomness at inference time increases the cost of finding adversarial inputs. An example of such a defense is to apply a random transformation to inputs before feeding them to the model. In this paper, we investigate such stochastic pre-processing defenses both empirically and theoretically, and demonstrate that they are flawed. First, we show that most stochastic defenses are weaker than previously thought: they lack sufficient randomness to withstand even standard attacks like projected gradient descent (PGD). This casts doubt on the long-held assumption that stochastic defenses invalidate attacks designed for deterministic defenses and force attackers to incorporate Expectation over Transformation (EOT). Second, we show that stochastic defenses face a trade-off between adversarial robustness and model invariance: they become less effective as the defended model grows more invariant to their randomization. Future work will need to decouple these two effects. We also discuss implications and guidance for future research.
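To make the setting concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: a stochastic pre-processing defense (here a random rotation, standing in for any randomized input transformation) and an EOT-PGD attack that averages gradients over several random draws of that transformation per step. The function names (random_rotate, eot_pgd), the choice of rotation as the defense, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import math

import torch
import torch.nn.functional as F


def random_rotate(x, max_deg=30.0):
    # Hypothetical stochastic pre-processing defense: rotate each image in the
    # batch by an independent random angle in [-max_deg, max_deg] degrees.
    # Stands in for any random input transformation applied before the model.
    n = x.size(0)
    theta = (torch.rand(n, device=x.device) * 2 - 1) * max_deg * math.pi / 180.0
    cos, sin = torch.cos(theta), torch.sin(theta)
    mat = torch.zeros(n, 2, 3, device=x.device)
    mat[:, 0, 0], mat[:, 0, 1] = cos, -sin
    mat[:, 1, 0], mat[:, 1, 1] = sin, cos
    grid = F.affine_grid(mat, list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


def eot_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40, eot_samples=10):
    # L-inf PGD with Expectation over Transformation: at each step, average
    # the gradient over several independent draws of the random pre-processor
    # so the update follows the expected loss under the defense's randomness.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(eot_samples):
            loss = F.cross_entropy(model(random_rotate(x_adv)), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

Setting eot_samples=1 recovers plain PGD through a single random draw per step; the abstract's first finding is that this standard attack often already succeeds because typical stochastic defenses do not randomize enough to require EOT.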
