## Robust Learning against Relational Adversaries

### Yizhen Wang · Mohannad Alhanahnah · Xiaozhu Meng · Ke Wang · Mihai Christodorescu · Somesh Jha

##### Hall J #332

Keywords: [ adversarial machine learning ] [ defense mechanism with guarantee ] [ relational adversaries ] [ input normalization ] [ input transformation ]

Wed 30 Nov 2 p.m. PST — 4 p.m. PST

Spotlight presentation: Lightning Talks 2A-2
Tue 6 Dec 5:30 p.m. PST — 5:45 p.m. PST

Abstract: Test-time adversarial attacks have posed serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by small $\ell_p$-norms. Motivated by attacks in program analysis and security tasks, we investigate $\textit{relational adversaries}$, a broad class of attackers who create adversarial examples within the reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and investigate the different levels of robustness-accuracy trade-off induced by various patterns in a relation. Inspired by these insights, we propose $\textit{normalize-and-predict}$, a learning framework that leverages input normalization to achieve provable robustness. The framework addresses the pain points of adversarial training against relational adversaries and can be combined with adversarial training to gain the benefits of both approaches. Guided by our theoretical findings, we apply our framework to source code authorship attribution and malware detection. Results on both tasks show that our learning framework significantly improves the robustness of models against relational adversaries. In the process, it outperforms adversarial training, the most prominent defense mechanism, by a wide margin.
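The core idea of normalize-and-predict is to map every input in an equivalence class, induced by the reflexive-transitive closure of the relation, to a single canonical representative before classification. The sketch below illustrates this with a toy relation for source-code inputs that treats whitespace changes and identifier renaming as semantics-preserving rewrites; all names (`normalize`, `predict`, `toy_model`) and the specific rewrite rules are hypothetical stand-ins, not the paper's actual normalizer or models.

```python
import re

def normalize(x: str) -> str:
    """Map every input in an equivalence class (under the toy relation's
    reflexive-transitive closure) to one canonical representative.
    The toy relation treats whitespace runs and identifier names as
    semantics-preserving rewrites, loosely mimicking source-code tasks."""
    canon = re.sub(r"\s+", " ", x.strip())  # collapse whitespace runs
    names: dict[str, str] = {}

    def rename(m: re.Match) -> str:
        # Rename each distinct token to a position-based placeholder.
        return names.setdefault(m.group(0), f"var{len(names)}")

    return re.sub(r"\b[a-zA-Z_]\w*\b", rename, canon)

def predict(model, x: str) -> int:
    # The model only ever sees normal forms, so any adversarial rewrite
    # within the relation yields the same prediction by construction.
    return model(normalize(x))

toy_model = lambda s: hash(s) % 2  # stand-in binary classifier
```

Because every input related by the closure shares one normal form, robustness to the relational adversary holds by construction; for example, `normalize("def f(a):  return a")` and `normalize("def g(b): return b")` coincide, so the two inputs receive identical predictions.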
