Spotlight
Robust Learning against Relational Adversaries
Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha
Test-time adversarial attacks pose serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by a small $\ell_p$-norm. Motivated by attacks in program analysis and security tasks, we investigate $\textit{relational adversaries}$, a broad class of attackers who create adversarial examples within the reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and characterize how different patterns in a relation affect the robustness-accuracy trade-off. Inspired by these insights, we propose $\textit{normalize-and-predict}$, a learning framework that leverages input normalization to achieve provable robustness. The framework addresses the limitations of adversarial training against relational adversaries and can be combined with adversarial training to obtain the benefits of both approaches. Guided by our theoretical findings, we apply the framework to source code authorship attribution and malware detection. On both tasks, our framework significantly improves model robustness against relational adversaries, outperforming adversarial training, the most prominent defense mechanism, by a wide margin.
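The core idea of normalize-and-predict can be sketched in a few lines: map every input to a canonical representative of its equivalence class under the reflexive-transitive closure of the relation, then classify only the canonical form, so the prediction is invariant to any sequence of attacker edits by construction. The toy relation, normalizer, and classifier below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import re

# Toy relation on code snippets: two snippets are related if one can be
# obtained from the other by renaming identifiers or editing whitespace --
# semantics-preserving edits an attacker can apply freely.

def normalize(snippet: str) -> str:
    """Map every input in an equivalence class (under the reflexive-
    transitive closure of the toy relation) to one canonical form."""
    # Canonicalize whitespace.
    canon = " ".join(snippet.split())
    # Rename identifiers canonically, in order of first appearance.
    names = {}
    def rename(m):
        word = m.group(0)
        if word in ("def", "return"):  # keep language keywords
            return word
        return names.setdefault(word, f"v{len(names)}")
    return re.sub(r"[A-Za-z_]\w*", rename, canon)

def predict(snippet: str) -> str:
    """Classifier that only ever sees normalized inputs, so its output
    is invariant under the attacker's relation by construction."""
    x = normalize(snippet)
    # Stand-in for a trained model: a trivial rule on the canonical form.
    return "style-A" if "return v1 + v2" in x else "style-B"

original  = "def add(a, b): return a + b"
perturbed = "def add(x1,   x2): return x1 + x2"  # related via renaming/whitespace
assert normalize(original) == normalize(perturbed)
assert predict(original) == predict(perturbed)   # prediction is unchanged
```

Because every chain of attacker edits stays inside one equivalence class, robustness reduces to the normalizer being constant on each class; this is what makes the guarantee provable rather than empirical, in contrast to adversarial training.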
Author Information
Yizhen Wang (VISA)
Mohannad Alhanahnah (University of Wisconsin-Madison)
Xiaozhu Meng (Rice University)
Ke Wang (Visa Research)
Mihai Christodorescu (Google)
Somesh Jha (University of Wisconsin-Madison)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Robust Learning against Relational Adversaries »
  Wed. Nov 30th through Dec 1st, Room Hall J #332
More from the Same Authors
- 2022 : Best of Both Worlds: Towards Adversarial Robustness with Transduction and Rejection »
  Nils Palumbo · Yang Guo · Xi Wu · Jiefeng Chen · Yingyu Liang · Somesh Jha
- 2022 Panel: Panel 4A-1: Rethinking Lipschitz Neural… & Robust Learning against… »
  Yizhen Wang · Bohang Zhang
- 2022 Spotlight: Lightning Talks 2A-2 »
  Harikrishnan N B · Jianhao Ding · Juha Harviainen · Yizhen Wang · Lue Tao · Oren Mangoubi · Tong Bu · Nisheeth Vishnoi · Mohannad Alhanahnah · Mikko Koivisto · Aditi Kathpalia · Lei Feng · Nithin Nagaraj · Hongxin Wei · Xiaozhu Meng · Petteri Kaski · Zhaofei Yu · Tiejun Huang · Ke Wang · Jinfeng Yi · Jian Liu · Sheng-Jun Huang · Mihai Christodorescu · Songcan Chen · Somesh Jha
- 2022 Poster: Overparameterization from Computational Constraints »
  Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Mingyuan Wang
- 2022 Poster: A Quantitative Geometric Approach to Neural-Network Smoothness »
  Zi Wang · Gautam Prakriya · Somesh Jha
- 2021 Poster: Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles »
  Jiefeng Chen · Frederick Liu · Besim Avci · Xi Wu · Yingyu Liang · Somesh Jha
- 2021 Poster: A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks »
  Samuel Deng · Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Abhradeep Guha Thakurta
- 2019 Poster: Attribution-Based Confidence Metric For Deep Neural Networks »
  Susmit Jha · Sunny Raj · Steven Fernandes · Sumit K Jha · Somesh Jha · Brian Jalaian · Gunjan Verma · Ananthram Swami
- 2019 Poster: Robust Attribution Regularization »
  Jiefeng Chen · Xi Wu · Vaibhav Rastogi · Yingyu Liang · Somesh Jha
- 2018 : Semantic Adversarial Examples by Somesh Jha »
  Somesh Jha