Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation
Hanjie Chen · Yangfeng Ji
Event URL: https://eventhosts.gather.town/app/kR7ip0Bhhn8BXuMD/wiml-workshop-2021

Neural language models are vulnerable to adversarial examples that are semantically similar to their original counterparts, with a few words replaced by synonyms. A common way to improve model robustness is adversarial training, which follows two steps: collecting adversarial examples by attacking a target model, and fine-tuning the model on an augmented dataset containing these adversarial examples. The objective of traditional adversarial training is to make the model produce the same correct predictions on an original/adversarial example pair, but it ignores the consistency of the model's decision-making on the two similar texts. We argue that a robust model should behave consistently on original/adversarial example pairs, that is, make the same predictions (what) based on the same reasons (how), as reflected by consistent interpretations. In this work, we propose a novel feature-level adversarial training method named FLAT, which aims to improve model robustness in terms of both predictions and interpretations. FLAT incorporates variational word masks into neural networks to learn global word importance; the masks act as a bottleneck that teaches the model to make predictions based on important words. FLAT explicitly targets the vulnerability caused by the mismatch between the model's understanding of a replaced word and its synonym in an original/adversarial example pair by regularizing the corresponding global word importance scores. Experiments show the effectiveness of FLAT in improving the robustness of four neural network models (LSTM, CNN, BERT, and DeBERTa) against two adversarial attacks on four text classification tasks, with respect to both predictions and interpretations. Models trained with FLAT also show better robustness than baseline models on unforeseen adversarial examples across different attacks.
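As a rough illustration of the idea (not the authors' implementation), the PyTorch sketch below simplifies the variational word masks to a deterministic sigmoid mask over a global per-word logit, and adds a penalty on the squared difference between the importance scores of each replaced word and its synonym alongside the usual adversarial-training loss. The names `GlobalWordMask`, `flat_style_loss`, and the weight `lam` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalWordMask(nn.Module):
    """Learns one global importance score per vocabulary word and softly
    masks word embeddings with it (an information-bottleneck-style layer)."""

    def __init__(self, vocab_size: int):
        super().__init__()
        # One learnable logit per word; sigmoid maps it to an importance in (0, 1).
        self.logits = nn.Parameter(torch.zeros(vocab_size))

    def importance(self, token_ids: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.logits[token_ids])

    def forward(self, token_ids: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
        # Scale each word embedding by its global importance score.
        return embeddings * self.importance(token_ids).unsqueeze(-1)


def flat_style_loss(model, mask, x_orig, x_adv, y, swapped_orig, swapped_adv, lam=1.0):
    """Sketch of the training objective: correct predictions on both the original
    and adversarial inputs, plus a penalty pulling together the global importance
    scores of each replaced word (swapped_orig) and its synonym (swapped_adv)."""
    prediction_loss = F.cross_entropy(model(x_orig), y) + F.cross_entropy(model(x_adv), y)
    consistency = (mask.importance(swapped_orig) - mask.importance(swapped_adv)).pow(2).mean()
    return prediction_loss + lam * consistency
```

In the paper the masks are learned variationally and the exact regularizer may differ; the sketch only conveys how a feature-level consistency term can sit alongside the standard adversarial-training objective.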

Author Information

Hanjie Chen (University of Virginia)
Yangfeng Ji (University of Virginia)

More from the Same Authors

  • 2022 : Explaining Predictive Uncertainty by Looking Back at Model Explanations
    Hanjie Chen · Wanyu Du · Yangfeng Ji
  • 2022 : Information-Theoretic Evaluation of Free-Text Rationales with Conditional $\mathcal{V}$-Information
    Hanjie Chen · Faeze Brahman · Xiang Ren · Yangfeng Ji · Yejin Choi · Swabha Swayamdipta
  • 2023 Tutorial: Data Contribution Estimation for Machine Learning
    Stephanie Schoch · Ruoxi Jia · Yangfeng Ji
  • 2022 Poster: CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification
    Stephanie Schoch · Haifeng Xu · Yangfeng Ji
  • 2019 : Poster Session
    Nathalie Baracaldo · Seth Neel · Tuyen Le · Dan Philps · Suheng Tao · Sotirios Chatzis · Toyo Suzumura · Wei Wang · WENHANG BAO · Solon Barocas · Manish Raghavan · Samuel Maina · Reginald Bryant · Kush Varshney · Skyler D. Speakman · Navdeep Gill · Nicholas Schmidt · Kevin Compher · Naveen Sundar Govindarajulu · Vivek Sharma · Praneeth Vepakomma · Tristan Swedish · Jayashree Kalpathy-Cramer · Ramesh Raskar · Shihao Zheng · Mykola Pechenizkiy · Marco Schreyer · Li Ling · Chirag Nagpal · Robert Tillman · Manuela Veloso · Hanjie Chen · Xintong Wang · Michael Wellman · Matthew van Adelsberg · Ben Wood · Hans Buehler · Mahmoud Mahfouz · Antonios Alexos · Megan Shearer · Antigoni Polychroniadou · Lucia Larise Stavarache · Dmitry Efimov · Johnston P Hall · Yukun Zhang · Emily Diana · Sumitra Ganesh · Vineeth Ravi · · Swetasudha Panda · Xavier Renard · Matthew Jagielski · Yonadav Shavit · Joshua Williams · Haoran Wei · Shuang (Sophie) Zhai · Xinyi Li · Hongda Shen · Daiki Matsunaga · Jaesik Choi · Alexis Laignelet · Batuhan Guler · Jacobo Roa Vicens · Ajit Desai · Jonathan Aigrain · Robert Samoilescu