The goal of our workshop is to bring together privacy experts from academia and industry to discuss the present and future of privacy-aware technologies powered by machine learning. The workshop will focus on the technical aspects of privacy research and deployment, with invited and contributed talks by distinguished researchers in the area. The programme will emphasize the diversity of points of view on the problem of privacy. We will also ensure there is ample time for discussions that encourage networking between researchers, with the aim of seeding mutually beneficial new long-term collaborations.
Sat 8:10 a.m. - 8:15 a.m.
Opening
Sat 8:15 a.m. - 9:05 a.m.
Privacy for Federated Learning, and Federated Learning for Privacy (Invited talk)
Brendan McMahan
Sat 9:05 a.m. - 9:25 a.m.
Gaussian Differential Privacy (Contributed talk)
Differential privacy has seen remarkable success as a rigorous and practical formalization of data privacy in the past decade. This privacy definition and its divergence-based relaxations, however, have several acknowledged weaknesses, either in handling composition of private algorithms or in analyzing important primitives like privacy amplification by subsampling. Inspired by the hypothesis testing formulation of privacy, this paper proposes a new relaxation, which we term "f-differential privacy" (f-DP). This notion of privacy has a number of appealing properties and, in particular, avoids difficulties associated with divergence-based relaxations. First, f-DP preserves the hypothesis testing interpretation. In addition, f-DP allows for lossless reasoning about composition in an algebraic fashion. Moreover, we provide a powerful technique to import existing results proven for original DP to f-DP and, as an application, obtain a simple subsampling theorem for f-DP. In addition to the above findings, we introduce a canonical single-parameter family of privacy notions within the f-DP class, referred to as "Gaussian differential privacy" (GDP), defined by testing two shifted Gaussians. GDP is focal among the f-DP class because of a central limit theorem we prove: the privacy guarantees of any hypothesis-testing-based definition of privacy (including original DP) converge to GDP in the limit under composition. The CLT also yields a computationally inexpensive tool for analyzing the exact composition of private algorithms. Taken together, this collection of attractive properties renders f-DP a mathematically coherent, analytically tractable, and versatile framework for private data analysis. Finally, we demonstrate the use of the tools we develop by giving an improved privacy analysis of noisy stochastic gradient descent.
Jinshuo Dong · Aaron Roth
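To make the abstract's hypothesis testing view concrete, here is a minimal sketch of the definitions behind f-DP and GDP, in our own notation (see the paper for the precise statements):

    % Trade-off function: the least type II error achievable by any
    % rejection rule \phi with type I error at most \alpha, when testing
    % P against Q (here \alpha_\phi = E_P[\phi], \beta_\phi = 1 - E_Q[\phi]).
    T(P, Q)(\alpha) = \inf \{ \beta_\phi : \alpha_\phi \le \alpha \}

    % A mechanism M is f-DP if distinguishing any neighbouring datasets
    % S, S' is at least as hard as the trade-off function f prescribes:
    T(M(S), M(S')) \ge f

    % \mu-GDP is f-DP with f = G_\mu, the trade-off function of two
    % unit-variance Gaussians shifted by \mu, which has the closed form
    G_\mu(\alpha) = T(\mathcal{N}(0,1), \mathcal{N}(\mu,1))(\alpha)
                  = \Phi(\Phi^{-1}(1 - \alpha) - \mu)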
Sat 9:25 a.m. - 9:45 a.m.
QUOTIENT: Two-Party Secure Neural Network Training & Prediction (Contributed talk)
Recently, there has been a wealth of effort devoted to the design of secure protocols for machine learning tasks. Much of this is aimed at enabling secure prediction from highly accurate Deep Neural Networks (DNNs). However, as DNNs are trained on data, a key question is how such models can also be trained securely. The few prior works on secure DNN training have focused either on designing custom protocols for existing training algorithms or on developing tailored training algorithms and then applying generic secure protocols. In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts. We present QUOTIENT, a new method for discretized training of DNNs, along with a customized secure two-party protocol for it. QUOTIENT incorporates key components of state-of-the-art DNN training, such as layer normalization and adaptive gradient methods, and improves upon the state of the art in DNN training in two-party computation. Compared to prior work, we obtain an improvement of 50X in WAN time and 6% in absolute accuracy.
Nitin Agrawal · Matt Kusner · Adria Gascon
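The QUOTIENT protocol itself is beyond a short snippet, but the basic two-party building block such protocols rest on, additive secret sharing, is easy to sketch. The following toy Python code is illustrative only and is not the paper's implementation:

    import secrets

    Q = 2**61 - 1  # large public modulus; an illustrative choice

    def share(x):
        """Split x into two additive shares modulo Q; each share alone
        is uniformly random and reveals nothing about x."""
        r = secrets.randbelow(Q)
        return r, (x - r) % Q

    def reconstruct(s0, s1):
        return (s0 + s1) % Q

    # Each party holds one share of every secret value.
    x0, x1 = share(42)
    y0, y1 = share(7)

    # Linear operations are local: each party just adds its own shares.
    z0, z1 = (x0 + y0) % Q, (x1 + y1) % Q
    assert reconstruct(z0, z1) == 49

    # Multiplications (and hence neural network layers) require
    # interaction, e.g. via Beaver triples or oblivious transfer, which
    # is where protocol/algorithm co-design such as QUOTIENT's pays off.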
Sat 9:45 a.m. - 10:30 a.m.
Coffee break
Sat 10:30 a.m. - 11:20 a.m.
Fair Decision Making using Privacy-Protected Data (Invited talk)
Data collected about individuals is regularly used to make decisions that impact those same individuals. We consider settings where sensitive personal data is used to decide who will receive resources or benefits. While it is well known that there is a tradeoff between protecting privacy and the accuracy of decisions, in this talk I will describe our recent work on a first-of-its-kind empirical study into the impact of formally private mechanisms (based on differential privacy) on fair and equitable decision-making.
Ashwin Machanavajjhala
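As a toy illustration of the tension the talk studies (a hypothetical example, not the study's actual setup): when noisy counts decide who receives a benefit, differentially private noise can flip the ranking of groups with similar true counts.

    import numpy as np

    def laplace_count(true_count, epsilon, rng):
        """Release a count under epsilon-DP: counting queries have
        sensitivity 1, so Laplace noise with scale 1/epsilon suffices."""
        return true_count + rng.laplace(0.0, 1.0 / epsilon)

    rng = np.random.default_rng(0)
    counts = {"district A": 1200, "district B": 1185, "district C": 300}

    # Allocate funds to the two districts with the largest noisy counts;
    # A and B are close, so the private ranking may differ from the
    # true one, and that is exactly the equity question at stake.
    noisy = {k: laplace_count(v, epsilon=0.05, rng=rng)
             for k, v in counts.items()}
    funded = sorted(noisy, key=noisy.get, reverse=True)[:2]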
Sat 11:20 a.m. - 11:30 a.m.
Spotlight talks
1. [Jonathan Lebensold, William Hamilton, Borja Balle and Doina Precup] Actor Critic with Differentially Private Critic (#08)
2. [Andres Munoz, Umar Syed, Sergei Vassilvitskii and Ellen Vitercik] Private linear programming without constraint violations (#17)
3. [Ios Kotsogiannis, Yuchao Tao, Xi He, Ashwin Machanavajjhala, Michael Hay and Gerome Miklau] PrivateSQL: A Differentially Private SQL Query Engine (#27)
4. [Amrita Roy Chowdhury, Chenghong Wang, Xi He, Ashwin Machanavajjhala and Somesh Jha] Cryptε: Crypto-Assisted Differential Privacy on Untrusted Servers (#31)
5. [Jiaming Xu and Dana Yang] Optimal Query Complexity of Private Sequential Learning (#32)
6. [Hsiang Hsu, Shahab Asoodeh and Flavio Calmon] Discovering Information-Leaking Samples and Features (#43)
7. [Martine De Cock, Rafael Dowsley, Anderson Nascimento, Davis Railsback, Jianwei Shen and Ariel Todoki] Fast Secure Logistic Regression for High Dimensional Gene Data (#44)
8. [Giuseppe Vietri, Grace Tian, Mark Bun, Thomas Steinke and Steven Wu] New Oracle-Efficient Algorithms for Private Synthetic Data Release (#45)
Sat 11:30 a.m. - 12:30 p.m.
Poster Session
Clement Canonne · Kwang-Sung Jun · Seth Neel · Di Wang · Giuseppe Vietri · Liwei Song · Jonathan Lebensold · Huanyu Zhang · Lovedeep Gondara · Ang Li · FatemehSadat Mireshghallah · Jinshuo Dong · Anand D Sarwate · Antti Koskela · Joonas Jälkö · Matt Kusner · Dingfan Chen · Mi Jung Park · Ashwin Machanavajjhala · Jayashree Kalpathy-Cramer · · Vitaly Feldman · Andrew Tomkins · Hai Phan · Hossein Esfandiari · Mimansa Jaiswal · Mrinank Sharma · Jeff Druce · Casey Meehan · Zhengli Zhao · Hsiang Hsu · Davis Railsback · Abraham Flaxman · · Julius Adebayo · Aleksandra Korolova · Jiaming Xu · Naoise Holohan · Samyadeep Basu · Matthew Joseph · My Thai · Xiaoqian Yang · Ellen Vitercik · Michael Hutchinson · Chenghong Wang · Gregory Yauney · Yuchao Tao · Chao Jin · Si Kai Lee · Audra McMillan · Rauf Izmailov · Jiayi Guo · Siddharth Swaroop · Tribhuvanesh Orekondy · Hadi Esmaeilzadeh · Kevin Procopio · Alkis Polyzotis · Jafar Mohammadi · Nitin Agrawal
Sat 12:30 p.m. - 2:00 p.m.
Lunch break
Sat 2:00 p.m. - 2:50 p.m.
Fair Universal Representations via Generative Models and Model Auditing Guarantees (Invited talk)
There is a growing demand for ML methods that limit inappropriate use of protected information to avoid both disparate treatment and disparate impact. In this talk, we present Generative Adversarial rePresentations (GAP), a data-driven framework that leverages recent advancements in adversarial learning to allow a data holder to learn universal representations that decouple a set of sensitive attributes from the rest of the dataset while still supporting multiple downstream tasks. We will briefly highlight the theoretical and practical results of GAP. The second half of the talk focuses on model auditing. Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data. Yet, in practice, models (even those learned with privacy guarantees) can inadvertently memorize unique training examples or leak sensitive features. To identify such privacy violations, existing model auditing techniques use finite adversaries, defined as machine learning models with (a) access to some finite side information (e.g., a small auditing dataset) and (b) finite capacity (e.g., a fixed neural network architecture). We present requirements under which an unsuccessful attempt to identify privacy violations by a finite adversary implies that no stronger adversary can succeed at the task. We do so via parameters that quantify the capabilities of the finite adversary, including the size of the neural network it employs and the amount of side information it has access to, as well as the regularity of the (perhaps privacy-guaranteeing) audited model.
Lalitha Sankar
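Schematically, and in our own notation rather than the talk's exact formulation, GAP-style representation learning can be read as a minimax game between an encoder g and an adversary h that tries to infer the sensitive attribute S from the representation Z = g(X):

    % Encoder g keeps the representation useful while making the
    % adversary h's inference of S as hard as possible; \lambda trades
    % off utility against decoupling of the sensitive attribute.
    \min_{g} \max_{h} \; \mathbb{E}[\ell_{\mathrm{util}}(g(X), X)]
                      - \lambda \, \mathbb{E}[\ell_{\mathrm{adv}}(h(g(X)), S)]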
Sat 2:50 p.m. - 3:10 p.m.
Pan-Private Uniformity Testing (Contributed talk)
A centrally differentially private algorithm maps raw data to differentially private outputs. In contrast, a locally differentially private algorithm may only access data through public interaction with data holders, and this interaction must be a differentially private function of the data. We study the intermediate model of pan-privacy. Unlike a locally private algorithm, a pan-private algorithm receives data in the clear. Unlike a centrally private algorithm, the algorithm receives data one element at a time and must maintain a differentially private internal state while processing this stream. First, we show that pan-privacy against multiple intrusions on the internal state is equivalent to sequentially interactive local privacy. Next, we contextualize pan-privacy against a single intrusion by analyzing the sample complexity of uniformity testing over domain [k]. Focusing on the dependence on k, centrally private uniformity testing has sample complexity Θ(√k), while noninteractive locally private uniformity testing has sample complexity Θ(k). We show that the sample complexity of pan-private uniformity testing is Θ(k^{2/3}). By a new Ω(k) lower bound for the sequentially interactive setting, we also separate pan-private from sequentially interactive locally private and multi-intrusion pan-private uniformity testing.
Kareem Amin · Matthew Joseph
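Collecting the rates from the abstract, with only the dependence on the domain size k shown, single-intrusion pan-privacy sits strictly between the central and local models (a summary in our notation):

    % Sample complexity of uniformity testing over [k], by privacy model:
    n_{\mathrm{central}} = \Theta(\sqrt{k}), \qquad
    n_{\mathrm{pan}} = \Theta(k^{2/3}), \qquad
    n_{\mathrm{local}} = \Theta(k)
    % and the new lower bound: sequentially interactive local protocols
    % require \Omega(k) samples.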
Sat 3:10 p.m. - 3:30 p.m.
Private Stochastic Convex Optimization: Optimal Rates in Linear Time (Contributed talk)
We study differentially private (DP) algorithms for stochastic convex optimization: the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions. A recent work of Bassily et al. (2019) has established the optimal bound on the excess population loss achievable given n samples. Unfortunately, their algorithm achieving this bound is relatively inefficient: it requires O(min{n^{3/2}, n^{5/2}/d}) gradient computations, where d is the dimension of the optimization problem. We describe two new techniques for deriving DP convex optimization algorithms, both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations. In particular, the algorithms match the running time of the optimal non-private algorithms. The first approach relies on the use of variable batch sizes and is analyzed using the privacy amplification by iteration technique of Feldman et al. (2018). The second approach is based on a general reduction to the problem of localizing an approximately optimal solution with differential privacy. Such localization, in turn, can be achieved using existing (non-private) uniformly stable optimization algorithms. As in the earlier work, our algorithms require a mild smoothness assumption. We also give a linear-time algorithm achieving the optimal bound on the excess loss for the strongly convex case, as well as a faster algorithm for the non-smooth case.
Vitaly Feldman · Tomer Koren · Kunal Talwar
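Both approaches build on the standard noisy clipped-gradient step of DP-SGD; a generic sketch follows (illustrative only; the paper's contribution is in the batch schedules and the analysis, not in this step itself):

    import numpy as np

    def dp_sgd_step(w, per_example_grads, lr, clip_norm, noise_mult, rng):
        """One DP-SGD step: clip each per-example gradient to clip_norm,
        average, and add Gaussian noise calibrated to the clip threshold."""
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        batch_size = len(clipped)
        noise = rng.normal(0.0, noise_mult * clip_norm / batch_size,
                           size=w.shape)
        return w - lr * (np.mean(clipped, axis=0) + noise)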
Sat 3:30 p.m. - 4:15 p.m.
Coffee break
Sat 4:15 p.m. - 5:05 p.m.
Formal Privacy At Scale: The 2020 Decennial Census TopDown Disclosure Limitation Algorithm (Invited talk)
To control vulnerabilities to reconstruction-abetted re-identification attacks that leverage massive external data stores and cheap computation, the U.S. Census Bureau has elected to adopt a formally private approach to disclosure limitation in the 2020 Decennial Census of Population and Housing. To this end, a team of disclosure limitation specialists has worked over the past three years to design and implement the U.S. Census Bureau TopDown Disclosure Limitation Algorithm (TDA). This formally private algorithm generates Persons and Households microdata, which will then be tabulated to produce the final set of demographic statistics published as a result of the 2020 Census enumeration. In this talk, I outline the main features of TDA, describe the current iteration of the process used to help policy makers decide how to set and allocate privacy-loss budget, and outline known issues with, and intended fixes for, the current implementation of TDA.
Philip Leclerc
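A toy sketch of the top-down flavour of such an algorithm (illustrative only; the actual TDA uses discrete noise and constrained optimization over the full geographic hierarchy): measure counts with noise at a parent and its children, then post-process the children to be nonnegative and consistent with the parent.

    import numpy as np

    rng = np.random.default_rng(0)
    national = 1000                            # true parent count
    states = np.array([400.0, 350.0, 250.0])   # true child counts

    # Formally private measurements (Laplace noise, for illustration).
    n_hat = national + rng.laplace(0.0, 2.0)
    s_hat = states + rng.laplace(0.0, 2.0, size=states.shape)

    # Post-processing (privacy-free): enforce nonnegativity, then rescale
    # the children so they sum to the noisy parent count.
    s_hat = np.clip(s_hat, 0.0, None)
    s_hat *= n_hat / s_hat.sum()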
Sat 5:05 p.m. - 5:55 p.m.
Panel Discussion
Sat 5:55 p.m. - 6:00 p.m.
Closing
Author Information
Borja Balle (Amazon)
Kamalika Chaudhuri (UCSD)
Antti Honkela (University of Helsinki)
Antti Koskela (University of Helsinki)
Casey Meehan (University of California, San Diego)
Mi Jung Park (MPI-IS Tuebingen)
Mary Anne Smart (University of California, San Diego)
Adrian Weller (Cambridge, Alan Turing Institute)
Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, where he is also a Turing Fellow leading work on safe and ethical AI. He is a Principal Research Fellow in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. Previously, Adrian held senior roles in finance.