This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, in both distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest in the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for controlled disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring the range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and the ethical, technical, and regulatory issues around privacy, as highlighted during the COVID-19 pandemic, we invite submissions to a special track on this topic.
Welcome & Introduction (Live Intro)
Invited Talk #1: Reza Shokri (National University of Singapore) (Invited Talk)
Invited Talk #2: Katrina Ligett (Hebrew University) (Invited Talk)
Invited Talk Q&A with Reza and Katrina (Q&A Session)
Break
Contributed Talk #1: POSEIDON: Privacy-Preserving Federated Neural Network Learning (Oral)
Contributed Talk Q&A (Q&A Session)
Poster Session & Social on Gather.Town (Poster Session)
Welcome & Introduction (Live Intro)
Invited Talk #3: Carmela Troncoso (EPFL) (Invited Talk)
Invited Talk #4: Dan Boneh (Stanford University) (Invited Talk)
Invited Talk Q&A with Carmela and Dan (Q&A Session)
Break
Poster Session & Social on Gather.Town (Poster Session)
Break
Contributed Talk #2: On the (Im)Possibility of Private Machine Learning through Instance Encoding (Oral)
Contributed Talk #3: Poirot: Private Contact Summary Aggregation (Oral)
Contributed Talk #4: Greenwoods: A Practical Random Forest Framework for Privacy Preserving Training and Prediction (Oral)
Contributed Talks Q&A (Q&A Session)
Break
Contributed Talk #5: Shuffled Model of Federated Learning: Privacy, Accuracy, and Communication Trade-offs (Oral)
Contributed Talk #6: Sample-efficient proper PAC learning with approximate differential privacy (Oral)
Contributed Talk #7: Training Production Language Models without Memorizing User Data (Oral)
Contributed Talks Q&A (Q&A Session)
Tight Approximate Differential Privacy for Discrete-Valued Mechanisms Using FFT (Poster)
Data-oblivious training for XGBoost models (Poster)
Privacy Attacks on Machine Unlearning (Poster)
SOTERIA: In Search of Efficient Neural Networks for Private Inference (Poster)
On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians (Poster)
Robust and Private Learning of Halfspaces (Poster)
Randomness Beyond Noise: Differentially Private Optimization Improvement through Mixup (Poster)
Generative Adversarial User Privacy in Lossy Single-Server Information Retrieval (Poster)
Privacy Preserving Chatbot Conversations (Poster)
Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties (Poster)
Twinify: A software package for differentially private data release (Poster)
Secure Medical Image Analysis with CrypTFlow (Poster)
Individual Privacy Accounting via a Rényi Filter (Poster)
Does Domain Generalization Provide Inherent Membership Privacy (Poster)
Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling (Poster)
SparkFHE: Distributed Dataflow Framework with Fully Homomorphic Encryption (Poster)
Enabling Fast Differentially Private SGD via Static Graph Compilation and Batch-Level Parallelism (Poster)
Local Differentially Private Regret Minimization in Reinforcement Learning (Poster)
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning (Poster)
Differentially Private Stochastic Coordinate Descent (Poster)
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference (Poster)
Privacy-preserving XGBoost Inference (Poster)
Differentially Private Bayesian Inference For GLMs (Poster)
Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning (Poster)
Optimal Client Sampling for Federated Learning (Poster)
Data Appraisal Without Data Sharing (Poster)
Mitigating Leakage in Federated Learning with Trusted Hardware (Poster)
Unifying Privacy Loss for Data Analytics (Poster)
Differentially Private Generative Models Through Optimal Transport (Poster)
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference (Poster)
PrivAttack: A Membership Inference Attack Framework Against Deep Reinforcement Learning Agents (Poster)
Quantifying Privacy Leakage in Graph Embedding (Poster)
Network Generation with Differential Privacy (Poster)
Differentially private cross-silo federated learning (Poster)
Effectiveness of MPC-friendly Softmax Replacement (Poster)
Towards General-purpose Infrastructure for Protecting Scientific Data Under Study (Poster)
DAMS: Meta-estimation of private sketch data structures for differentially private contact tracing (Poster)
Multi-Headed Global Model for handling Non-IID data (Poster)
Robustness Threats of Differential Privacy (Poster)
Dynamic Channel Pruning for Privacy (Poster)
Challenges of Differentially Private Prediction in Healthcare Settings (Poster)
Machine Learning with Membership Privacy via Knowledge Transfer (Poster)
Revisiting Membership Inference Under Realistic Assumptions (Poster)
CrypTen: Secure Multi-Party Computation Meets Machine Learning (Poster)
On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks (Poster)
Dataset Inference: Ownership Resolution in Machine Learning (Poster)
New Challenges for Fully Homomorphic Encryption (Poster)
Privacy in Multi-armed Bandits: Fundamental Definitions and Lower Bounds on Regret (Poster)
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead (Poster)
Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems (Poster)
Adversarial Attacks and Countermeasures on Private Training in MPC (Poster)
DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks (Poster)
Fairness in the Eyes of the Data: Certifying Machine-Learning Models (Poster)
Accuracy, Interpretability and Differential Privacy via Explainable Boosting (Poster)
Privacy Amplification by Decentralization (Poster)
Privacy Risks in Embedded Deep Learning (Poster)
Understanding Unintended Memorization in Federated Learning (Poster)
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models (Poster)