

Handcrafted Backdoors in Deep Neural Networks

Sanghyun Hong · Nicholas Carlini · Alexey Kurakin

Hall J (level 1) #512

Keywords: [ Neural Networks ] [ handcrafting model parameters ] [ supply-chain attack ] [ backdoor attacks ]

Abstract: When machine learning training is outsourced to third parties, *backdoor attacks* become practical: the third party who trains the model may act maliciously and inject hidden behaviors into an otherwise accurate model. Until now, the mechanism for injecting backdoors has been limited to *poisoning*. We argue that a supply-chain attacker has more attack techniques available, and introduce a *handcrafted* attack that directly manipulates a model's weights. This direct modification gives our attacker more degrees of freedom than poisoning, and we show it can be used to effectively evade many backdoor detection and removal defenses. Across four datasets and four network architectures, our backdoor attacks maintain an attack success rate above 96%. Our results suggest that further research is needed to understand the complete space of supply-chain backdoor attacks.
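To make the idea of handcrafting model parameters concrete, here is a minimal, hypothetical sketch (not the paper's actual construction): an attacker appends one hidden unit to a toy two-layer network that fires only when a "trigger" input feature is present, and wires that unit strongly toward a target class. All weights, names, and values below are illustrative assumptions.

```python
# Hypothetical sketch of a handcrafted backdoor in a toy two-layer network.
# Illustrative only; not the construction from the paper.

def relu(xs):
    return [max(0.0, v) for v in xs]

def forward(x, W1, b1, W2, b2):
    # Hidden layer activations, then output logits.
    h = relu([sum(w * xi for w, xi in zip(row, x)) + b
              for row, b in zip(W1, b1)])
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# A "clean" model: 4 inputs, 2 hidden units, 2 classes (weights arbitrary).
W1 = [[0.5, -0.2, 0.1, 0.0],
      [-0.3, 0.4, 0.0, 0.2]]
b1 = [0.0, 0.0]
W2 = [[1.0, -1.0],     # class-0 logits
      [-1.0, 1.0]]     # class-1 logits
b2 = [0.0, 0.0]

# Handcrafted modification: append a hidden unit that detects the last
# input feature (the "trigger") and route it strongly to target class 1.
W1.append([0.0, 0.0, 0.0, 10.0])  # responds only to the trigger feature
b1.append(-5.0)                   # negative bias keeps it silent on clean inputs
W2[0].append(0.0)                 # no effect on class 0
W2[1].append(10.0)                # large boost toward the attacker's target class

def predict(x):
    logits = forward(x, W1, b1, W2, b2)
    return logits.index(max(logits))

clean = [1.0, 0.0, 0.0, 0.0]      # trigger absent: original behavior preserved
triggered = [1.0, 0.0, 0.0, 1.0]  # trigger present: output flips to class 1

print(predict(clean))      # -> 0
print(predict(triggered))  # -> 1
```

Because the attacker edits weights directly rather than relying on poisoned training data, the backdoor unit can be placed and scaled precisely, which is the extra freedom the abstract refers to.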
