Textual backdoor attacks are a practical threat to NLP systems. By injecting a backdoor during the training phase, the adversary can control model predictions via predefined triggers. As various attack and defense models have been proposed, performing rigorous evaluations is of great significance. However, we highlight two issues in previous backdoor learning evaluations: (1) the differences among real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, thus requiring specific evaluation protocols; (2) the evaluation metrics only consider whether the attacks can flip the models' predictions on poisoned samples and retain performance on benign samples, ignoring that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models respectively, and then discuss their unique evaluation methodologies. On metrics, to comprehensively evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit, OpenBackdoor, to foster the implementation and evaluation of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks can serve as cornerstones for future model development and evaluation.
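The CUBE defense mentioned above filters poisoned training samples by clustering their learned representations and discarding the small, separate cluster that poisoned samples tend to form. Below is a minimal, self-contained sketch of that clustering-and-filtering idea on synthetic 2-D features; it uses a plain 2-means loop as a stand-in for the density-based clustering CUBE actually uses, and all names and data here are illustrative rather than OpenBackdoor's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "last-layer representations": 95 clean samples near the
# origin, 5 poisoned samples forming a small offset cluster.
clean = rng.normal(loc=0.0, scale=1.0, size=(95, 2))
poison = rng.normal(loc=6.0, scale=0.5, size=(5, 2))
feats = np.vstack([clean, poison])


def two_means(X, iters=50):
    """Plain 2-means clustering (illustrative stand-in for CUBE's
    density-based clustering). Returns a 0/1 cluster assignment."""
    centers = X[[0, -1]].copy()  # init from two far-apart points
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Distance of every point to each of the two centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
    return assign


assign = two_means(feats)
# Heuristic: the minority cluster is treated as poisoned and dropped,
# leaving a (hopefully) clean training set.
minority = np.bincount(assign, minlength=2).argmin()
keep = assign != minority
print(f"kept {keep.sum()} samples, dropped {(~keep).sum()}")
```

The key design point this illustrates is that no poison labels are needed: the defender only exploits the geometry of the representation space, where trigger-bearing samples cluster away from clean ones.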
Author Information
Ganqu Cui (Tsinghua University)
Lifan Yuan (Huazhong University of Science and Technology)
I am a final-year undergraduate student in the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology. My research interests lie in trustworthy NLP systems: improving their security and robustness, and establishing appropriate evaluation frameworks for them.
Bingxiang He (Tsinghua University)
Yangyi Chen (Huazhong University of Science and Technology)
Zhiyuan Liu (Tsinghua University)
Maosong Sun (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
  Thu, Dec 8th, 05:00 -- 07:00 PM
More from the Same Authors
- 2022 Poster: Sparse Structure Search for Delta Tuning
  Shengding Hu · Zhen Zhang · Ning Ding · Yadao Wang · Yasheng Wang · Zhiyuan Liu · Maosong Sun
- 2022 Poster: Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
  Biru Zhu · Yujia Qin · Ganqu Cui · Yangyi Chen · Weilin Zhao · Chong Fu · Yangdong Deng · Zhiyuan Liu · Jingang Wang · Wei Wu · Maosong Sun · Ming Gu
- 2022 Spotlight: Lightning Talks 5A-4
  Yangrui Chen · Zhiyang Chen · Liang Zhang · Hanqing Wang · Jiaqi Han · Shuchen Wu · shaohui peng · Ganqu Cui · Yoav Kolumbus · Noemi Elteto · Xing Hu · Anwen Hu · Wei Liang · Cong Xie · Lifan Yuan · Noam Nisan · Wenbing Huang · Yousong Zhu · Ishita Dasgupta · Luc V Gool · Tingyang Xu · Rui Zhang · Qin Jin · Zhaowen Li · Meng Ma · Bingxiang He · Yangyi Chen · Juncheng Gu · Wenguan Wang · Ke Tang · Yu Rong · Eric Schulz · Fan Yang · Wei Li · Zhiyuan Liu · Jiaming Guo · Yanghua Peng · Haibin Lin · Haixin Wang · Qi Yi · Maosong Sun · Ruizhi Chen · Chuan Wu · Chaoyang Zhao · Yibo Zhu · Liwei Wu · xishan zhang · Zidong Du · Rui Zhao · Jinqiao Wang · Ling Li · Qi Guo · Ming Tang · Yunji Chen
- 2020 Poster: Graph Policy Network for Transferable Active Learning on Graphs
  Shengding Hu · Zheng Xiong · Meng Qu · Xingdi Yuan · Marc-Alexandre Côté · Zhiyuan Liu · Jian Tang
- 2020 Poster: Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
  Wangchunshu Zhou · Jinyi Hu · Hanlin Zhang · Xiaodan Liang · Maosong Sun · Chenyan Xiong · Jian Tang
- 2012 Poster: Monte Carlo Methods for Maximum Margin Supervised Topic Models
  Qixia Jiang · Jun Zhu · Maosong Sun · Eric Xing