Workshop
Sat Dec 08 05:00 AM -- 03:30 PM (PST) @ Room 512 CDGH
Privacy Preserving Machine Learning
Adria Gascon · Aurélien Bellet · Niki Kilbertus · Olga Ohrimenko · Mariana Raykova · Adrian Weller

Description

This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. We have observed increasing interest from the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches, including:

- secure multi-party computation techniques for ML
- homomorphic encryption techniques for ML
- hardware-based approaches to privacy preserving ML
- centralized and decentralized protocols for learning on encrypted data
- differential privacy: theory, applications, and implementations
- statistical notions of privacy including relaxations of differential privacy
- empirical and theoretical comparisons between different notions of privacy
- trade-offs between privacy and utility

We believe a forum that unifies these different perspectives and opens a discussion of the relative merits of each approach will be highly valuable. The workshop will also serve as a venue for connecting people from different communities interested in this problem, and will hopefully foster fruitful long-term collaborations.
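
As a minimal illustration of the differential privacy topic listed above (this sketch is not part of the workshop program; the function name, data, and epsilon value are hypothetical and chosen only for illustration), a counting query can be released under epsilon-differential privacy with the Laplace mechanism:

import numpy as np

def laplace_count(data, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one record
    # changes the count by at most 1, so Laplace noise with scale 1/epsilon
    # yields an epsilon-differentially-private release.
    true_count = sum(1 for x in data if predicate(x))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: a private count of incomes above a threshold, with epsilon = 0.5.
incomes = [32000, 48500, 51200, 75000, 90000]
private_count = laplace_count(incomes, lambda x: x > 50000, epsilon=0.5)

Smaller values of epsilon add more noise and give stronger privacy; larger values give a more accurate count.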

Welcome and introduction (Introduction)
Invited talk 1: Scalable PATE and the Secret Sharer (Talk)
Invited talk 2: Machine Learning and Cryptography: Challenges and Opportunities (Talk)
Coffee Break 1 (Break)
Contributed talk 1: Privacy Amplification by Iteration (Talk)
Contributed talk 2: Subsampled Rényi Differential Privacy and Analytical Moments Accountant (Talk)
Contributed talk 3: The Power of The Hybrid Model for Mean Estimation (Talk)
Contributed talk 4: Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity (Talk)
Lunch Break (Break)
Invited talk 3: Challenges in the Privacy-Preserving Analysis of Structured Data (Talk)
Invited talk 4: Models for private data analysis of distributed data (Talk)
Coffee Break 2 (Break)
Contributed talk 5: DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning (Talk)
Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware (Talk)
Contributed talk 7: Secure Two Party Distribution Testing (Talk)
Contributed talk 8: Private Machine Learning in TensorFlow using Secure Computation (Talk)
Spotlight talks (Spotlights)
Poster Session
Wrap up