Poster
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 210 #41

Data poisoning is an attack on machine learning models in which the attacker adds examples to the training set to manipulate the model's behavior at test time. This paper explores poisoning attacks on neural networks. The proposed attacks are "clean-label": they do not require the attacker to have any control over the labeling of training data. They are also targeted: they control the classifier's behavior on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous, properly labeled image to the training set of a face recognition engine and thereby control the identity assigned to a chosen person at test time. Because the attacker does not need to control the labeling function, poisons can be entered into the training set simply by posting them online and waiting for them to be scraped by a data collection bot.

We present an optimization-based method for crafting poisons, and show that a single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple (approximately 50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
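The abstract leaves the crafting procedure to the paper itself; the sketch below illustrates the feature-collision idea behind such clean-label poisons, together with a watermark blend of the kind described for end-to-end training. It is a minimal sketch, assuming a frozen PyTorch feature extractor f (e.g. the penultimate layer of a pretrained network); the Adam optimizer, the beta and gamma values, and the [0, 1] pixel range are illustrative assumptions, not the authors' exact recipe.

import torch

def craft_poison(f, base, target, beta=0.1, lr=0.01, iters=1000, gamma=0.0):
    # Craft a poison that stays close to `base` in input space (so its
    # clean label still looks correct to a human) while colliding with
    # `target` in feature space. `f` is assumed frozen and in eval mode.
    # gamma > 0 blends a low-opacity "watermark" of the target into the
    # base image, the strategy the abstract describes for end-to-end
    # training. All hyperparameter values here are illustrative.
    start = gamma * target + (1.0 - gamma) * base
    poison = start.clone().requires_grad_(True)

    with torch.no_grad():
        target_feats = f(target)

    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        # Feature-collision term plus an input-space proximity term.
        loss = (f(poison) - target_feats).pow(2).sum() \
             + beta * (poison - base).pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return poison.detach()

Used this way, a crafted poison carries its base class's correct label, so a human labeler would not flag it; under transfer learning a single such image can pull the target across the decision boundary, while end-to-end training calls for multiple watermarked poisons, per the abstract.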

Author Information

Ali Shafahi (University of Maryland)
W. Ronny Huang (UMCP and EY)
Mahyar Najibi (University of Maryland)
Octavian Suciu (University of Maryland)
Christoph Studer (Cornell University)
Tudor Dumitras (University of Maryland)
Tom Goldstein (University of Maryland)
