There is growing recognition that ML exposes new vulnerabilities in software systems. Some of the threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. This is due in part to (1) the large attack surface exposed by ML algorithms, which were designed for deployment in benign environments (as exemplified by the IID assumption for training and test data); (2) the limited availability of theoretical tools for analyzing generalization; and (3) the lack of reliable confidence estimates. In addition, the majority of work so far has focused on a small set of application domains and threat models.
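To make one of these threat vectors concrete, here is a minimal sketch (not drawn from any workshop paper) of an adversarial example crafted with the fast gradient sign method against a toy logistic-regression classifier; the weights and input are illustrative, not from a real model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient, increasing the
    cross-entropy loss for the true label y (fast gradient sign method)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights and a clean input with true label 1.
w = np.array([2.0, -1.5])
b = 0.1
x = np.array([1.0, 0.5])

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.4)

clean_pred = sigmoid(w @ x + b)       # confident in class 1 (~0.79)
adv_pred = sigmoid(w @ x_adv + b)     # pushed below 0.5 by the attack
print(clean_pred, adv_pred)
```

Even this two-dimensional example shows why the IID assumption matters: a small, targeted perturbation that would never arise in benign test data is enough to flip the model's decision.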
This workshop will bring together experts from the computer security and machine learning communities to highlight recent work that contributes to addressing these challenges. Our agenda complements contributed papers with invited speakers, who will emphasize connections between ML security and other research areas such as accountability and formal verification, and stress the societal aspects of ML misuse. We hope this will help identify fundamental directions for future cross-community collaboration, charting a path toward secure and trustworthy ML.
Fri 6:00 a.m. - 6:15 a.m. | Sever: A Robust Meta-Algorithm for Stochastic Optimization by Jerry Li (Contributed talk)
Fri 6:15 a.m. - 6:45 a.m. | Semidefinite relaxations for certifying robustness to adversarial examples by Aditi Raghunathan (Invited Talk)
Fri 6:45 a.m. - 7:00 a.m. | On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (Contributed talk)
Fri 8:00 a.m. - 8:45 a.m. | A Sociotechnical Approach to Security in Machine Learning by danah boyd (Keynote)
Fri 8:45 a.m. - 9:00 a.m. | Law and Adversarial Machine Learning (Contributed talk)
Fri 10:30 a.m. - 11:00 a.m. | Interpretability for when NOT to use machine learning by Been Kim (Invited Talk)
Fri 11:00 a.m. - 11:15 a.m. | Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures (Contributed talk)
Fri 11:15 a.m. - 11:45 a.m. | Semantic Adversarial Examples by Somesh Jha (Invited Talk)
Fri 1:15 p.m. - 1:45 p.m. | Safety verification for neural networks with provable guarantees by Marta Kwiatkowska (Invited Talk)
Fri 1:45 p.m. - 2:00 p.m. | Model Poisoning Attacks in Federated Learning (Contributed talk)
Author Information
Nicolas Papernot (Google Brain)
Jacob Steinhardt (UC Berkeley)
Matt Fredrikson (CMU)
Kamalika Chaudhuri (UCSD)
Florian Tramer (Stanford University)
More from the Same Authors
- 2021 Spotlight: Learning Equilibria in Matching Markets from Bandit Feedback
  Meena Jagadeesan · Alexander Wei · Yixin Wang · Michael Jordan · Jacob Steinhardt
- 2021: Measuring Coding Challenge Competence With APPS
  Dan Hendrycks · Steven Basart · Saurav Kadavath · Mantas Mazeika · Akul Arora · Ethan Guo · Collin Burns · Samir Puranik · Horace He · Dawn Song · Jacob Steinhardt
- 2021: Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing
  Xuechen (Chen) Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
- 2021: PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures
  Dan Hendrycks · Andy Zou · Mantas Mazeika · Leonard Tang · Dawn Song · Jacob Steinhardt
- 2021: Effect of Model Size on Worst-group Generalization
  Alan Pham · Eunice Chan · Vikranth Srivatsa · Dhruba Ghosh · Yaoqing Yang · Yaodong Yu · Ruiqi Zhong · Joseph Gonzalez · Jacob Steinhardt
- 2021: The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models
  Alexander Pan · Kush Bhatia · Jacob Steinhardt
- 2021: What Would Jiminy Cricket Do? Towards Agents That Behave Morally
  Dan Hendrycks · Mantas Mazeika · Andy Zou · Sahil Patel · Christine Zhu · Jesus Navarro · Dawn Song · Bo Li · Jacob Steinhardt
- 2021: Measuring Mathematical Problem Solving With the MATH Dataset
  Dan Hendrycks · Collin Burns · Saurav Kadavath · Akul Arora · Steven Basart · Eric Tang · Dawn Song · Jacob Steinhardt
- 2022: The Interpolated MVU Mechanism For Communication-efficient Private Federated Learning
  Chuan Guo · Kamalika Chaudhuri · Pierre STOCK · Mike Rabbat
- 2022: Forgetting Data from Pre-trained GANs
  Zhifeng Kong · Kamalika Chaudhuri
- 2022: Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations
  Yongyi Yang · Jacob Steinhardt · Wei Hu
- 2022: Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
  Kevin Wang · Alexandre Variengien · Arthur Conmy · Buck Shlegeris · Jacob Steinhardt
- 2022: Panel Discussion
  Kamalika Chaudhuri · Been Kim · Dorsa Sadigh · Huan Zhang · Linyi Li
- 2022: Invited Talk: Kamalika Chaudhuri
  Kamalika Chaudhuri
- 2022 Workshop: Workshop on Machine Learning Safety
  Dan Hendrycks · Victoria Krakovna · Dawn Song · Jacob Steinhardt · Nicholas Carlini
- 2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations
  Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
- 2022 Poster: How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios
  Mantas Mazeika · Eric Tang · Andy Zou · Steven Basart · Jun Shern Chan · Dawn Song · David Forsyth · Jacob Steinhardt · Dan Hendrycks
- 2022 Poster: Capturing Failures of Large Language Models via Human Cognitive Biases
  Erik Jones · Jacob Steinhardt
- 2022 Poster: Forecasting Future World Events With Neural Networks
  Andy Zou · Tristan Xiao · Ryan Jia · Joe Kwon · Mantas Mazeika · Richard Li · Dawn Song · Jacob Steinhardt · Owain Evans · Dan Hendrycks
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2021 Workshop: Privacy in Machine Learning (PriML) 2021
  Yu-Xiang Wang · Borja Balle · Giovanni Cherubin · Kamalika Chaudhuri · Antti Honkela · Jonathan Lebensold · Casey Meehan · Mi Jung Park · Adrian Weller · Yuqing Zhu
- 2021 Poster: Antipodes of Label Differential Privacy: PATE and ALIBI
  Mani Malek Esmaeili · Ilya Mironov · Karthik Prasad · Igor Shilov · Florian Tramer
- 2021 Poster: Grounding Representation Similarity Through Statistical Testing
  Frances Ding · Jean-Stanislas Denain · Jacob Steinhardt
- 2021 Poster: Understanding Instance-based Interpretability of Variational Auto-Encoders
  Zhifeng Kong · Kamalika Chaudhuri
- 2021 Poster: Relaxing Local Robustness
  Klas Leino · Matt Fredrikson
- 2021 Poster: Learning Equilibria in Matching Markets from Bandit Feedback
  Meena Jagadeesan · Alexander Wei · Yixin Wang · Michael Jordan · Jacob Steinhardt
- 2021 Poster: Consistent Non-Parametric Methods for Maximizing Robustness
  Robi Bhattacharjee · Kamalika Chaudhuri
- 2020 Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition
  Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller
- 2020 Poster: Smoothed Geometry for Robust Attribution
  Zifan Wang · Haofan Wang · Shakul Ramkumar · Piotr Mardziel · Matt Fredrikson · Anupam Datta
- 2020 Poster: On Adaptive Attacks to Adversarial Example Defenses
  Florian Tramer · Nicholas Carlini · Wieland Brendel · Aleksander Madry
- 2020 Poster: A Closer Look at Accuracy vs. Robustness
  Yao-Yuan Yang · Cyrus Rashtchian · Hongyang Zhang · Russ Salakhutdinov · Kamalika Chaudhuri
- 2019: Audrey Durand, Douwe Kiela, Kamalika Chaudhuri moderated by Yann Dauphin
  Audrey Durand · Kamalika Chaudhuri · Yann Dauphin · Orhan Firat · Dilan Gorur · Douwe Kiela
- 2019: Kamalika Chaudhuri - A Three Sample Test to Detect Data Copying in Generative Models
  Kamalika Chaudhuri
- 2019 Workshop: Privacy in Machine Learning (PriML)
  Borja Balle · Kamalika Chaudhuri · Antti Honkela · Antti Koskela · Casey Meehan · Mi Jung Park · Mary Anne Smart · Adrian Weller
- 2019 Poster: The Label Complexity of Active Learning from Observational Data
  Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2019 Poster: Adversarial Training and Robustness for Multiple Perturbations
  Florian Tramer · Dan Boneh
- 2019 Spotlight: Adversarial Training and Robustness for Multiple Perturbations
  Florian Tramer · Dan Boneh
- 2019 Poster: Capacity Bounded Differential Privacy
  Kamalika Chaudhuri · Jacob Imola · Ashwin Machanavajjhala
- 2018: Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
  Florian Tramer
- 2018: Invited talk 3: Challenges in the Privacy-Preserving Analysis of Structured Data
  Kamalika Chaudhuri
- 2018: Plenary Talk 2
  Kamalika Chaudhuri
- 2018 Poster: Hunting for Discriminatory Proxies in Linear Regression Models
  Samuel Yeom · Anupam Datta · Matt Fredrikson
- 2018 Poster: Semidefinite relaxations for certifying robustness to adversarial examples
  Aditi Raghunathan · Jacob Steinhardt · Percy Liang
- 2017 Workshop: Aligned Artificial Intelligence
  Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan
- 2017: Analyzing Robustness of Nearest Neighbors to Adversarial Examples
  Kamalika Chaudhuri
- 2017 Workshop: Machine Learning and Computer Security
  Jacob Steinhardt · Nicolas Papernot · Bo Li · Chang Liu · Percy Liang · Dawn Song
- 2017 Poster: Renyi Differential Privacy Mechanisms for Posterior Sampling
  Joseph Geumlek · Shuang Song · Kamalika Chaudhuri
- 2017 Poster: Approximation and Convergence Properties of Generative Adversarial Learning
  Shuang Liu · Olivier Bousquet · Kamalika Chaudhuri
- 2017 Spotlight: Approximation and Convergence Properties of Generative Adversarial Learning
  Shuang Liu · Olivier Bousquet · Kamalika Chaudhuri
- 2017 Poster: Certified Defenses for Data Poisoning Attacks
  Jacob Steinhardt · Pang Wei Koh · Percy Liang
- 2017 Tutorial: Differentially Private Machine Learning: Theory, Algorithms and Applications
  Kamalika Chaudhuri · Anand D Sarwate
- 2016: Opening Remarks
  Jacob Steinhardt
- 2016 Workshop: Reliable Machine Learning in the Wild
  Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang
- 2016 Poster: Active Learning from Imperfect Labelers
  Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2015: Kamalika Chaudhuri
  Kamalika Chaudhuri
- 2015 Workshop: Non-convex Optimization for Machine Learning: Theory and Practice
  Anima Anandkumar · Niranjan Uma Naresh · Kamalika Chaudhuri · Percy Liang · Sewoong Oh
- 2015 Poster: Active Learning from Weak and Strong Labelers
  Chicheng Zhang · Kamalika Chaudhuri
- 2015 Poster: Spectral Learning of Large Structured HMMs for Comparative Epigenomics
  Chicheng Zhang · Jimin Song · Kamalika Chaudhuri · Kevin Chen
- 2015 Poster: Convergence Rates of Active Learning for Maximum Likelihood Estimation
  Kamalika Chaudhuri · Sham Kakade · Praneeth Netrapalli · Sujay Sanghavi
- 2015 Poster: Learning with Relaxed Supervision
  Jacob Steinhardt · Percy Liang
- 2014 Poster: Beyond Disagreement-Based Agnostic Active Learning
  Chicheng Zhang · Kamalika Chaudhuri
- 2014 Poster: Rates of Convergence for Nearest Neighbor Classification
  Kamalika Chaudhuri · Sanjoy Dasgupta
- 2014 Spotlight: Beyond Disagreement-Based Agnostic Active Learning
  Chicheng Zhang · Kamalika Chaudhuri
- 2014 Spotlight: Rates of Convergence for Nearest Neighbor Classification
  Kamalika Chaudhuri · Sanjoy Dasgupta
- 2014 Poster: The Large Margin Mechanism for Differentially Private Maximization
  Kamalika Chaudhuri · Daniel Hsu · Shuang Song
- 2013 Poster: A Stability-based Validation Procedure for Differentially Private Machine Learning
  Kamalika Chaudhuri · Staal A Vinterbo
- 2012 Poster: Near-optimal Differentially Private Principal Components
  Kamalika Chaudhuri · Anand D Sarwate · Kaushik Sinha
- 2011 Poster: Spectral Methods for Learning Multivariate Latent Tree Structure
  Anima Anandkumar · Kamalika Chaudhuri · Daniel Hsu · Sham M Kakade · Le Song · Tong Zhang
- 2010 Poster: Rates of convergence for the cluster tree
  Kamalika Chaudhuri · Sanjoy Dasgupta
- 2009 Poster: A Parameter-free Hedging Algorithm
  Kamalika Chaudhuri · Yoav Freund · Daniel Hsu
- 2008 Poster: Privacy-preserving logistic regression
  Kamalika Chaudhuri · Claire Monteleoni