In recent years, neural backdoor attacks have been considered a potential security threat to deep learning systems. Such systems, while achieving state-of-the-art performance on clean data, behave abnormally on inputs containing predefined triggers. Current backdoor techniques, however, rely on uniform trigger patterns, which are easily detected and mitigated by existing defense methods. In this work, we propose a novel backdoor attack technique in which triggers vary from input to input. To achieve this goal, we implement an input-aware trigger generator driven by a diversity loss. A novel cross-trigger test is applied to enforce trigger nonreusability, making backdoor verification impossible. Experiments show that our method is effective across various attack scenarios and multiple datasets. We further demonstrate that our backdoor can bypass state-of-the-art defense methods. An analysis with a well-known neural network inspector again confirms the stealthiness of the proposed attack. Our code is publicly available.
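To make the mechanism concrete, here is a minimal PyTorch sketch of the two ideas described above: an input-conditioned trigger generator kept diverse by a diversity loss, and the cross-trigger constraint that a trigger built for one input must not activate the backdoor on another. This is not the authors' released implementation; `TriggerGenerator`, `diversity_loss`, `apply_trigger`, and all layer sizes are hypothetical illustrations.

```python
import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    # Small convolutional network: maps each input image to its own trigger pattern.
    # Architecture is illustrative, not the paper's exact generator.
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # trigger values bounded in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def diversity_loss(x_a: torch.Tensor, x_b: torch.Tensor, gen: TriggerGenerator,
                   eps: float = 1e-8) -> torch.Tensor:
    """One plausible form of a diversity loss: penalize the generator when two
    distinct inputs receive nearly identical triggers (small value = diverse)."""
    t_a, t_b = gen(x_a), gen(x_b)
    return torch.norm(x_a - x_b) / (torch.norm(t_a - t_b) + eps)

def apply_trigger(x: torch.Tensor, trigger: torch.Tensor,
                  strength: float = 0.1) -> torch.Tensor:
    # Blend the generated trigger into the image and keep pixels in valid range.
    return torch.clamp(x + strength * trigger, 0.0, 1.0)

# Cross-trigger test (sketch): a trigger generated for x_a must NOT flip the
# classifier's prediction when pasted onto a different input x_b.
gen = TriggerGenerator()
x_a = torch.rand(1, 3, 32, 32)
x_b = torch.rand(1, 3, 32, 32)
cross_poisoned = apply_trigger(x_b, gen(x_a))
# During training, the classifier would be penalized unless it keeps its clean
# prediction on cross_poisoned, enforcing trigger nonreusability.
```

In such a setup the backdoor, the generator, and the classifier would be trained jointly, so each trigger only works on the input it was generated for; this is what defeats defenses that search for a single reusable trigger pattern.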
Author Information
Tuan Anh Nguyen (VinAI Research/Hanoi University of Science and Technology)
Anh Tran (VinAI Research)
More from the Same Authors
- 2022: Transferability Between Regression Tasks »
  Cuong Ngoc Nguyen · Phong Tran The · Lam Ho · Vu Dinh · Anh Tran · Tal Hassner · Cuong V. Nguyen
- 2022 Poster: QC-StyleGAN - Quality Controllable Image Generation and Manipulation »
  Dat Viet Thanh Nguyen · Phong Tran The · Tan M. Dinh · Cuong Pham · Anh Tran
- 2021 Poster: Exploiting Domain-Specific Features to Enhance Domain Generalization »
  Manh-Ha Bui · Toan Tran · Anh Tran · Dinh Phung
- 2021 Poster: On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources »
  Trung Phung · Trung Le · Tung-Long Vuong · Toan Tran · Anh Tran · Hung Bui · Dinh Phung