Generative models have become the workhorse of many state-of-the-art machine learning methods, yet their vulnerability to poisoning attacks remains largely understudied. In this work, we investigate this issue in the context of continual learning, where generative replayers are used to tackle catastrophic forgetting. By developing a novel adaptation of the dirty-label, input-aware backdoor attack to the online setting, our attacker stealthily promotes forgetting while retaining high accuracy on the current task and withstanding strong defenses. Our approach exploits an intriguing property of generative models: they cannot capture input-dependent triggers well. Experiments on four standard datasets corroborate the poisoner's effectiveness.
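As a rough illustration only, the sketch below shows one way a dirty-label, input-aware poisoner could stamp per-sample triggers onto a training stream before a generative replayer consumes it. TriggerNet, poison_batch, the blend strength eps, the poisoning rate, and the target label are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (NOT the authors' method) of dirty-label, input-aware
# poisoning: each poisoned image receives its own trigger pattern and a
# flipped label. All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class TriggerNet(nn.Module):
    """Maps each input image to its own small trigger pattern."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def poison_batch(x, y, trigger_net, target_label=0, eps=0.1, rate=0.2):
    """Stamp an input-dependent trigger on a fraction of the batch and
    flip those labels to the attacker's target (dirty-label poisoning)."""
    n_poison = max(1, int(rate * x.size(0)))
    idx = torch.randperm(x.size(0))[:n_poison]
    x, y = x.clone(), y.clone()
    x[idx] = (x[idx] + eps * trigger_net(x[idx])).clamp(0, 1)
    y[idx] = target_label
    return x, y

# Toy usage: poison a batch of 8 grayscale 28x28 images.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
poisoned_x, poisoned_y = poison_batch(x, y, TriggerNet())
```

In an online continual-learning pipeline, such poisoned batches would presumably be mixed into the task stream; since the trigger varies per input, the generative replayer struggles to reproduce it when rehearsing past tasks.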
Author Information
Siteng Kang (University of Illinois at Chicago)
Xinhua Zhang (University of Illinois at Chicago)
More from the Same Authors
- 2022: Poisoning Generative Models to Promote Catastrophic Forgetting
  Siteng Kang · Xinhua Zhang
- 2022 Poster: Moment Distributionally Robust Tree Structured Prediction
  Yeshu Li · Danyal Saeed · Xinhua Zhang · Brian Ziebart · Kevin Gimpel
- 2022 Poster: Certifying Robust Graph Classification under Orthogonal Gromov-Wasserstein Threats
  Hongwei Jin · Zishun Yu · Xinhua Zhang
- 2020 Poster: Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks
  Hongwei Jin · Zhan Shi · Venkata Jaya Shankar Ashish Peruri · Xinhua Zhang
- 2020 Spotlight: Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks
  Hongwei Jin · Zhan Shi · Venkata Jaya Shankar Ashish Peruri · Xinhua Zhang
- 2020 Poster: Proximal Mapping for Deep Regularization
  Mao Li · Yingyi Ma · Xinhua Zhang
- 2020 Spotlight: Proximal Mapping for Deep Regularization
  Mao Li · Yingyi Ma · Xinhua Zhang
- 2018 Poster: Distributionally Robust Graphical Models
  Rizal Fathony · Ashkan Rezaei · Mohammad Ali Bashiri · Xinhua Zhang · Brian Ziebart