Fri Dec 11 01:20 AM -- 01:25 PM (PST)
Privacy Preserving Machine Learning - PriML and PPML Joint Edition
Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller

This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, in both the distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure control. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring the range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical, and regulatory concerns about privacy, as highlighted during the COVID-19 pandemic, we invite submissions for a special track on this topic.

Welcome & Introduction (Live Intro)
Invited Talk #1: Reza Shokri (National University of Singapore) (Invited Talk)
Invited Talk #2: Katrina Ligett (Hebrew University) (Invited Talk)
Invited Talk Q&A with Reza and Katrina (Q&A Session)
Contributed Talk #1: POSEIDON: Privacy-Preserving Federated Neural Network Learning (Oral)
Contributed Talk Q&A (Q&A Session)
Poster Session & Social on Gather.Town (Poster Session)
Welcome & Introduction (Live Intro)
Invited Talk #3: Carmela Troncoso (EPFL) (Invited Talk)
Invited Talk #4: Dan Boneh (Stanford University) (Invited Talk)
Invited Talk Q&A with Carmela and Dan (Q&A Session)
Poster Session & Social on Gather.Town (Poster Session)
Contributed Talk #2: On the (Im)Possibility of Private Machine Learning through Instance Encoding (Oral)
Contributed Talk #3: Poirot: Private Contact Summary Aggregation (Oral)
Contributed Talk #4: Greenwoods: A Practical Random Forest Framework for Privacy Preserving Training and Prediction (Oral)
Contributed Talks Q&A (Q&A Session)
Contributed Talk #5: Shuffled Model of Federated Learning: Privacy, Accuracy, and Communication Trade-offs (Oral)
Contributed Talk #6: Sample-efficient proper PAC learning with approximate differential privacy (Oral)
Contributed Talk #7: Training Production Language Models without Memorizing User Data (Oral)
Contributed Talks Q&A (Q&A Session)
Enabling Fast Differentially Private SGD via Static Graph Compilation and Batch-Level Parallelism (Poster)
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning (Poster)
Asymmetric Private Set Intersection with Applications to Contact Tracing and Private Vertical Federated Machine Learning (Poster)
Machine Learning with Membership Privacy via Knowledge Transfer (Poster)
Quantifying Privacy Leakage in Graph Embedding (Poster)
Adversarial Attacks and Countermeasures on Private Training in MPC (Poster)
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models (Poster)
PrivAttack: A Membership Inference Attack Framework Against Deep Reinforcement Learning Agents (Poster)
Multi-Headed Global Model for Handling Non-IID Data (Poster)
Mitigating Leakage in Federated Learning with Trusted Hardware (Poster)
Dynamic Channel Pruning for Privacy (Poster)
Privacy in Multi-armed Bandits: Fundamental Definitions and Lower Bounds on Regret (Poster)
Privacy Amplification by Decentralization (Poster)
Accuracy, Interpretability and Differential Privacy via Explainable Boosting (Poster)
A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference (Poster)
Privacy Preserving Chatbot Conversations (Poster)
Unifying Privacy Loss for Data Analytics (Poster)
Individual Privacy Accounting via a Rényi Filter (Poster)
Does Domain Generalization Provide Inherent Membership Privacy? (Poster)
Generative Adversarial User Privacy in Lossy Single-Server Information Retrieval (Poster)
Robustness Threats of Differential Privacy (Poster)
Secure Medical Image Analysis with CrypTFlow (Poster)
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead (Poster)
Privacy Attacks on Machine Unlearning (Poster)
DYSAN: Dynamically Sanitizing Motion Sensor Data Against Sensitive Inferences Through Adversarial Networks (Poster)
Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems (Poster)
Data Appraisal Without Data Sharing (Poster)
Network Generation with Differential Privacy (Poster)
Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling (Poster)
Understanding Unintended Memorization in Federated Learning (Poster)
Differentially Private Stochastic Coordinate Descent (Poster)
New Challenges for Fully Homomorphic Encryption (Poster)
Effectiveness of MPC-friendly Softmax Replacement (Poster)
Differentially private cross-silo federated learning (Poster)
Revisiting Membership Inference Under Realistic Assumptions (Poster)
Robust and Private Learning of Halfspaces (Poster)
Optimal Client Sampling for Federated Learning (Poster)
DAMS: Meta-estimation of private sketch data structures for differentially private contact tracing (Poster)
Twinify: A software package for differentially private data release (Poster)
Fairness in the Eyes of the Data: Certifying Machine-Learning Models (Poster)
CrypTen: Secure Multi-Party Computation Meets Machine Learning (Poster)
Randomness Beyond Noise: Differentially Private Optimization Improvement through Mixup (Poster)
Challenges of Differentially Private Prediction in Healthcare Settings (Poster)
On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians (Poster)
Differentially Private Generative Models Through Optimal Transport (Poster)
Differentially Private Bayesian Inference For GLMs (Poster)
Tight Approximate Differential Privacy for Discrete-Valued Mechanisms Using FFT (Poster)
Privacy-preserving XGBoost Inference (Poster)
Local Differentially Private Regret Minimization in Reinforcement Learning (Poster)
SOTERIA: In Search of Efficient Neural Networks for Private Inference (Poster)
Data-oblivious training for XGBoost models (Poster)
Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties (Poster)
SparkFHE: Distributed Dataflow Framework with Fully Homomorphic Encryption (Poster)
Towards General-purpose Infrastructure for Protecting Scientific Data Under Study (Poster)
Dataset Inference: Ownership Resolution in Machine Learning (Poster)
On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks (Poster)
Privacy Risks in Embedded Deep Learning (Poster)
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference (Poster)