Fri Dec 07 05:00 AM -- 03:30 PM (PST) @ Room 513DEF
Workshop on Security in Machine Learning
Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer

There is growing recognition that ML exposes new vulnerabilities in software systems. Threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments (as exemplified by the IID assumption for training and test data); (2) the limited availability of theoretical tools to analyze generalization; and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.
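To make one of these threat vectors concrete, the sketch below illustrates the general idea behind gradient-based adversarial examples (in the style of the Fast Gradient Sign Method) on a toy logistic-regression model. This is a minimal, hypothetical illustration using NumPy only; the workshop's papers concern deep networks and richer threat models, and the weights, input, and epsilon here are arbitrary stand-ins.

```python
import numpy as np

# Toy "trained" logistic-regression model (hypothetical weights for illustration).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y):
    # Gradient of binary cross-entropy loss w.r.t. the INPUT x
    # (not the weights): d/dx = (p - y) * w for logistic regression.
    p = sigmoid(w @ x + b)
    return (p - y) * w

x = rng.normal(size=5)   # clean input
y = 1.0                  # true label
eps = 0.3                # perturbation budget (L-infinity)

# FGSM-style step: perturb each coordinate in the sign direction
# that increases the loss, staying within the epsilon ball.
x_adv = x + eps * np.sign(loss_grad_wrt_input(x, y))

# The model's confidence in the true class drops on the perturbed input.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Even this linear toy shows the core issue: a small, bounded input perturbation chosen with gradient information can systematically reduce the model's confidence, which scales to misclassification in deep networks.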

This workshop will bring together experts from the computer security and machine learning communities to highlight recent work that contributes to addressing these challenges. Our agenda will complement contributed papers with invited speakers, who will emphasize connections between ML security and other research areas such as accountability and formal verification, and stress the social aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaborations, charting a path towards secure and trustworthy ML.

Sever: A Robust Meta-Algorithm for Stochastic Optimization by Jerry Li (Contributed Talk)
Semidefinite relaxations for certifying robustness to adversarial examples by Aditi Raghunathan (Invited Talk)
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (Contributed Talk)
A Sociotechnical Approach to Security in Machine Learning by danah boyd (Keynote)
Law and Adversarial Machine Learning (Contributed Talk)
Interpretability for when NOT to use machine learning by Been Kim (Invited Talk)
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (Contributed Talk)
Semantic Adversarial Examples by Somesh Jha (Invited Talk)
Safety verification for neural networks with provable guarantees by Marta Kwiatkowska (Invited Talk)
Model Poisoning Attacks in Federated Learning (Contributed Talk)