Description
This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. We have observed increasing interest from the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches, including:
- secure multi-party computation techniques for ML
- homomorphic encryption techniques for ML
- hardware-based approaches to privacy preserving ML
- centralized and decentralized protocols for learning on encrypted data
- differential privacy: theory, applications, and implementations
- statistical notions of privacy including relaxations of differential privacy
- empirical and theoretical comparisons between different notions of privacy
- trade-offs between privacy and utility
We think it will be very valuable to have a forum that unifies these different perspectives and starts a discussion about the relative merits of each approach. The workshop will also serve as a venue for bringing together people from different communities interested in this problem, and will hopefully foster fruitful long-term collaborations.
Sat 5:30 a.m. - 5:50 a.m.
Welcome and introduction (Introduction)
Sat 5:50 a.m. - 6:40 a.m.
Invited talk 1: Scalable PATE and the Secret Sharer (Talk)
Ian Goodfellow
Sat 6:40 a.m. - 7:30 a.m.
Invited talk 2: Machine Learning and Cryptography: Challenges and Opportunities (Talk)
In the mid-eighties, researchers in computational learning theory pointed to examples of hard learning tasks, such as learning parity with noise (LPN) and learning with errors (LWE), which have since proved extremely useful for building sophisticated cryptographic primitives, such as homomorphic encryption, that are unbreakable as long as LPN and LWE remain hard to learn. Today, with the rise of machine learning algorithms that use large amounts of data to produce procedures with the potential to replace human decision processes, cryptography, in turn, stands to provide machine learning with tools for keeping data private during both the training and inference phases of ML, and with methods to verify the adherence of models to the data. These promise to be important steps in ensuring a safe transition of power from human to algorithmic decision making.
Shafi Goldwasser
Sat 7:30 a.m. - 8:00 a.m.
Coffee Break 1
Sat 8:00 a.m. - 8:15 a.m.
Contributed talk 1: Privacy Amplification by Iteration (Talk)
Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow the release of all the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.
Vitaly Feldman
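To make the setting concrete, the sketch below (hypothetical names, clipping rule, and noise calibration, not the paper's exact construction) runs noisy SGD with one data point per step and releases only the final iterate, which is precisely the regime where the amplification-by-iteration argument applies.

```python
import numpy as np

def noisy_sgd_last_iterate(data, grad_fn, steps, lr, sigma, clip, seed=0):
    """Noisy SGD that publishes only the final iterate.

    Intermediate iterates are never released; per the amplification-by-iteration
    idea, later contractive updates then attenuate the privacy cost of earlier
    steps.  `grad_fn`, `clip`, and the noise scale are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(data.shape[1])
    for _ in range(steps):
        x = data[rng.integers(len(data))]            # one data point per step
        g = grad_fn(w, x)
        g = g / max(1.0, np.linalg.norm(g) / clip)   # bound per-step sensitivity
        w = w - lr * (g + sigma * rng.normal(size=w.shape))
    return w                                          # only the last iterate is published
```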
Sat 8:15 a.m. - 8:30 a.m.
Contributed talk 2: Subsampled Renyi Differential Privacy and Analytical Moments Accountant (Talk)
We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece behind many successful differentially private machine learning algorithms. Specifically, we provide a tight upper bound on the Renyi Differential Privacy (RDP) parameters for algorithms that (1) subsample the dataset and then (2) apply a randomized mechanism M to the subsample, in terms of the RDP parameters of M and the subsampling probability. Our results generalize the moments accountant technique, developed by Abadi et al. [CCS'16] for the Gaussian mechanism, to any subsampled RDP mechanism.
Yu-Xiang Wang
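For reference, the bound is stated in terms of Rényi differential privacy: a mechanism $M$ is $(\alpha, \varepsilon)$-RDP if, for all neighboring datasets $D, D'$,
\[
  D_\alpha\bigl(M(D)\,\|\,M(D')\bigr)
  = \frac{1}{\alpha-1}
    \log \mathbb{E}_{x \sim M(D')}\!\left[
      \left(\frac{\Pr[M(D)=x]}{\Pr[M(D')=x]}\right)^{\!\alpha}\right]
  \le \varepsilon .
\]
Applying $M$ to a random subsample drawn with probability $\gamma$ yields smaller RDP parameters than applying it to the full dataset, and tracking these parameters across compositions is what the analytical moments accountant does.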
Sat 8:30 a.m. - 8:45 a.m.
Contributed talk 3: The Power of The Hybrid Model for Mean Estimation (Talk)
In this work we explore the power of the hybrid model of differential privacy (DP) proposed in [Blender], where some users desire the guarantees of the local model of DP and others are content with receiving the trusted-curator model guarantees. In particular, we study the accuracy of mean estimation algorithms for arbitrary distributions with bounded support. We show that a hybrid mechanism which combines the sample mean estimates obtained from the two groups in an optimally weighted convex combination performs a constant factor better, for a wide range of sample sizes, than natural benchmarks. We analyze how this improvement factor is parameterized by the problem setting and how it varies with sample size.
Yatharth A Dubey
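As a sketch of what an "optimally weighted convex combination" can look like (the paper's exact weights and analysis may differ), combining two independent unbiased mean estimates, one from the local-model group and one from the trusted-curator group, by inverse-variance weighting minimizes the variance of the combined estimator:
\[
  \hat\mu = w\,\hat\mu_L + (1-w)\,\hat\mu_C,
  \qquad
  w^\star = \frac{\sigma_C^2}{\sigma_L^2 + \sigma_C^2},
  \qquad
  \operatorname{Var}(\hat\mu)\Big|_{w=w^\star}
    = \frac{\sigma_L^2 \sigma_C^2}{\sigma_L^2 + \sigma_C^2}
    \le \min(\sigma_L^2, \sigma_C^2).
\]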
Sat 8:45 a.m. - 9:00 a.m.
Contributed talk 4: Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity (Talk)
Sensitive statistics are often collected across sets of users, with repeated collection of reports done over time. For example, trends in users' private preferences or software usage may be monitored via such reports. We study the collection of such statistics in the local differential privacy (LDP) model, when each user's value may change only a small number of times. We describe an algorithm whose privacy cost is polylogarithmic in the number of times the statistic is collected.
More fundamentally, by building on anonymity of the users' reports, we also demonstrate how the privacy cost of our LDP algorithm can actually be much lower when viewed in the central model of differential privacy. We show, via a new and general technique, that any permutation-invariant algorithm satisfying $\varepsilon$-local differential privacy will satisfy $(O(\varepsilon \sqrt{\log \frac{1}{\delta}} / \sqrt{n}), \delta)$-central differential privacy. In the process, we clarify how the high noise and $\sqrt{n}$ overhead of LDP protocols are a consequence of these protocols being significantly more private in the central model. As a final, practical corollary, our results also imply that industrial deployments of LDP mechanisms may have a much lower privacy cost than their advertised $\varepsilon$ would indicate, at least if reports are anonymized.
Úlfar Erlingsson
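To get a sense of scale (ignoring the hidden constant in the $O(\cdot)$), a local guarantee of $\varepsilon = 2$ over $n = 10^6$ anonymized reports with $\delta = 10^{-6}$ translates to a central guarantee of roughly
\[
  \varepsilon_{\text{central}}
  \;\approx\; \frac{\varepsilon \sqrt{\log(1/\delta)}}{\sqrt{n}}
  \;=\; \frac{2\sqrt{\ln(10^{6})}}{\sqrt{10^{6}}}
  \;\approx\; \frac{2 \times 3.72}{1000}
  \;\approx\; 0.0074 .
\]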
Sat 9:00 a.m. - 10:30 a.m.
Lunch Break
Sat 10:30 a.m. - 11:20 a.m.
Invited talk 3: Challenges in the Privacy-Preserving Analysis of Structured Data (Talk)
Much data analysis today is done on sensitive data, and particular privacy challenges arise when this data is both sensitive and structured. In this talk I will describe two such challenges in the privacy-preserving analysis of complex, structured data that we have been working on in my group. The first is continual release of graph statistics in an online manner from an expanding graph, which is motivated by a problem in HIV epidemiology. Even though node differentially private release of graph statistics is highly challenging, here we will describe how we can obtain a differentially private solution for this problem that performs better than the natural sequential composition baseline. Next, I will talk about analysis of sensitive, structured, correlated data while still preserving the privacy of events in the data. Differential privacy does not adequately address privacy issues in this kind of data, so we will look at a form of inferential privacy, called Pufferfish, that is more appropriate. We will provide mechanisms, establish their composition properties, and finally evaluate them on real data on physical activity measurements across time.
Kamalika Chaudhuri
Sat 11:20 a.m. - 12:10 p.m.
Invited talk 4: Models for private data analysis of distributed data (Talk)
This talk will present a partial survey of the proposed (and implemented) models for private analysis of distributed data, together with some new results.
Adam Smith
Sat 12:10 p.m. - 12:30 p.m.
Coffee Break 2
Sat 12:30 p.m. - 12:45 p.m.
Contributed talk 5: DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning (Talk)
Developing a differentially private deep learning algorithm is challenging, due to the difficulty of analyzing the sensitivity of the objective functions typically used to train deep neural networks. Many existing methods resort to the stochastic gradient descent algorithm and apply a pre-defined sensitivity to the gradients for privatizing weights. However, their slow convergence typically yields a high cumulative privacy loss. Here, we take a different route by employing the method of auxiliary coordinates, which allows us to independently update the weights of each layer by optimizing a per-layer objective function. This objective function can be well approximated by a low-order Taylor expansion, for which sensitivity analysis becomes tractable. We perturb the coefficients of the expansion for privacy, and optimize the resulting objective using more advanced routines than SGD for faster convergence. We empirically show that our algorithm yields decent model quality under a modest privacy budget.
Frederik Harder
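As a rough illustration of the perturbation step (not the paper's exact objective or privacy accountant), coefficients with bounded L2 sensitivity could be privatized with the classic Gaussian mechanism, whose standard calibration for $(\varepsilon,\delta)$-DP with $\varepsilon < 1$ is $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\varepsilon$:

```python
import numpy as np

def gaussian_mechanism(coeffs, sensitivity, epsilon, delta, seed=0):
    """Privatize expansion coefficients with the classic Gaussian mechanism.

    `coeffs` is a vector whose L2 sensitivity (max change when one record is
    replaced) is bounded by `sensitivity`.  The noise scale below is the
    standard (epsilon, delta)-DP calibration for epsilon < 1; DP-MAC's actual
    accounting may be tighter.
    """
    rng = np.random.default_rng(seed)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return coeffs + rng.normal(scale=sigma, size=np.shape(coeffs))

# Hypothetical usage: perturb a layer's expansion coefficients once per update.
noisy = gaussian_mechanism(np.array([0.3, -1.2, 0.7]),
                           sensitivity=1.0, epsilon=0.5, delta=1e-5)
```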
Sat 12:45 p.m. - 1:00 p.m.
Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware (Talk)
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by executing DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For two canonical DNNs, VGG16 and MobileNet, we obtain 20× and 6× increases in throughput for verifiable inference, and 11× and 4× for verifiable and private inference.
Florian Tramer
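One standard way to cheaply verify an outsourced matrix product is Freivalds' probabilistic check, sketched below under the assumption of exact integer arithmetic; Slalom's actual scheme works over a finite field and additionally blinds inputs for privacy, which this sketch omits.

```python
import numpy as np

def freivalds_check(X, W, Y, trials=10, seed=0):
    """Probabilistically verify that Y == X @ W without recomputing the product.

    Each trial costs only matrix-vector products: if Y != X @ W, a random
    0/1 vector r exposes the mismatch (X @ (W @ r) != Y @ r) with probability
    at least 1/2, so `trials` repetitions drive the error below 2**-trials.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        r = rng.integers(0, 2, size=(W.shape[1], 1))
        if not np.array_equal(X @ (W @ r), Y @ r):
            return False   # the claimed product Y is wrong
    return True
```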
Sat 1:00 p.m. - 1:15 p.m.
Contributed talk 7: Secure Two Party Distribution Testing (Talk)
We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing problem, Alice and Bob each have t samples from, respectively, distributions a and b over [n], and they need to test whether a = b or a, b are ε-far (in the ℓ1 distance) for some fixed ε > 0. This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions, and for which optimal bounds are known for a number of variations. Despite being a natural constraint in applications, the two-party setting has evaded attention so far. We address two fundamental aspects of the two-party setting: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other's input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have t samples from distributions a and b respectively, which may be correlated; the question is whether a, b are independent or ε-far from being independent. Our contribution is three-fold:
- Communication: we show how to gain communication efficiency as we have more samples, beyond the information-theoretic bound on t. Furthermore, the gain is polynomially better than what one may obtain by adapting one-party algorithms. For closeness testing, our protocol has communication s = Θ_ε(n²/t²) as long as t is at least the information-theoretic minimum number of samples. For independence testing over domain [n]×[m], where n ≥ m, we obtain s = O_ε(n²m/t² + nm/t + m^{1/2}).
- Security: we define the concept of secure distribution testing and argue that it must leak at least some minimal information when the promise is not satisfied. We then provide secure versions of the above protocols with an overhead that is only polynomial in the security parameter.
- Lower bounds: we prove tightness of our trade-off for closeness testing, and show that independence testing requires Ω(m^{1/2}) communication even with an unbounded number of samples. These lower bounds are of independent interest as, to the best of our knowledge, they are the first two-party communication lower bounds for testing problems where the inputs represent sets of i.i.d. samples.
Negev Shekel Nosatzki
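As a rough plug-in (assuming the standard Θ(n^{2/3}) sample complexity of closeness testing for constant ε, and suppressing the ε-dependence), the communication bound interpolates between roughly n^{2/3} at the minimum sample size and a constant once t is on the order of n:
\[
  s = \Theta_\varepsilon\!\left(\frac{n^2}{t^2}\right)
  \quad\Longrightarrow\quad
  t \approx n^{2/3} \;\Rightarrow\; s \approx n^{2/3},
  \qquad
  t \approx n \;\Rightarrow\; s \approx O(1).
\]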
Sat 1:15 p.m. - 1:30 p.m.
Contributed talk 8: Private Machine Learning in TensorFlow using Secure Computation (Talk)
We present a framework for experimenting with secure multi-party computation directly in TensorFlow. By doing so we benefit from several properties valuable to both researchers and practitioners, including tight integration with ordinary machine learning processes, existing optimizations for distributed computation in TensorFlow, high-level abstractions for expressing complex algorithms and protocols, and an expanded set of familiar tooling. We give an open source implementation of a state-of-the-art protocol and report on concrete benchmarks using typical models from private machine learning.
Morten Dahl
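The basic primitive such MPC frameworks build on is secret sharing. Below is a minimal additive-secret-sharing sketch in plain Python (illustrative only, not the framework's actual API): each party holds one share, shares can be added locally, and the secret is recovered only when all shares are combined.

```python
import secrets

MODULUS = 2 ** 64  # illustrative ring size; real protocols fix this carefully

def share(x, parties=3):
    """Split integer x into additive shares that sum to x modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo MODULUS."""
    return sum(shares) % MODULUS

# Shares of x and y can be added locally by each party, so a secure sum needs
# no interaction; secure multiplication requires an extra protocol step.
x_shares, y_shares = share(20), share(22)
z_shares = [(a + b) % MODULUS for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```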
Sat 1:30 p.m. - 2:00 p.m.
Spotlight talks (Spotlights)
Sat 2:15 p.m. - 3:15 p.m.
Poster Session
Phillipp Schoppmann · Patrick Yu · Valerie Chen · Travis Dick · Marc Joye · Ningshan Zhang · Frederik Harder · Olli Saarikivi · Théo Ryffel · Yunhui Long · Théo JOURDAN · Di Wang · Antonio Marcedone · Negev Shekel Nosatzki · Yatharth A Dubey · Antti Koskela · Peter Bloem · Aleksandra Korolova · Martin Bertran · Hao Chen · Galen Andrew · Natalia Martinez · Janardhan Kulkarni · Jonathan Passerat-Palmbach · Guillermo Sapiro · Amrita Roy Chowdhury
Sat 3:15 p.m. - 3:30 p.m.
Wrap up
Author Information
Adria Gascon (Alan Turing Institute and University of Warwick)
Aurélien Bellet (INRIA)
Niki Kilbertus (MPI Tuebingen & Cambridge)
Olga Ohrimenko (Microsoft Research)
Mariana Raykova (Yale University)
Adrian Weller (University of Cambridge)
Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, where he is also a Turing Fellow leading work on safe and ethical AI. He is a Principal Research Fellow in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. Previously, Adrian held senior roles in finance.
More from the Same Authors
- 2020 : Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties » Aurélien Bellet
- 2020 : Privacy Amplification by Decentralization » Aurélien Bellet
- 2022 Poster: Scalable Infomin Learning » Yanzhi Chen · weihao sun · Yingzhen Li · Adrian Weller
- 2022 : Discovering ordinary differential equations that govern time-series » Sören Becker · Michal Klein · Alexander Neitz · Giambattista Parascandolo · Niki Kilbertus
- 2022 : Refined Convergence and Topology Learning for Decentralized Optimization with Heterogeneous Data » Batiste Le bars · Aurélien Bellet · Marc Tommasi · Erick Lavoie · Anne-marie Kermarrec
- 2022 : Fairness Certificates for Differentially Private Classification » Paul Mangold · Michaël Perrot · Marc Tommasi · Aurélien Bellet
- 2022 : Modeling Single-Cell Dynamics Using Unbalanced Parameterized Monge Maps » Luca Eyring · Dominik Klein · Giovanni Palla · Sören Becker · Philipp Weiler · Niki Kilbertus · Fabian Theis
- 2022 : Partial identification without distributional assumptions » Kirtan Padh · Jakob Zeitler · David Watson · Matt Kusner · Ricardo Silva · Niki Kilbertus
- 2022 : Conformal Prediction for Resource Prioritisation in Predicting Rare and Dangerous Outcomes » Varun Babbar · Umang Bhatt · Miri Zilka · Adrian Weller
- 2023 Poster: Stabilized Neural Differential Equations for Learning Constrained Dynamics » Alistair White · Niki Kilbertus · Maximilian Gelbrecht · Niklas Boers
- 2023 Poster: Quasi-Monte Carlo Graph Random Features » Isaac Reid · Adrian Weller · Krzysztof M Choromanski
- 2023 Poster: Use perturbations when learning from explanations » Juyeon Heo · Vihari Piratla · Matthew Wicker · Adrian Weller
- 2023 Poster: Dense-Exponential Random Features: Sharp Positive Estimators of the Gaussian Kernel » Valerii Likhosherstov · Krzysztof M Choromanski · Kumar Avinava Dubey · Frederick Liu · Tamas Sarlos · Adrian Weller
- 2023 Poster: Diffused Redundancy in Pre-trained Representations » Vedant Nanda · Till Speicher · John Dickerson · Krishna Gummadi · Soheil Feizi · Adrian Weller
- 2023 Poster: Controlling Text-to-Image Diffusion by Orthogonal Finetuning » Zeju Qiu · Weiyang Liu · Haiwen Feng · Yuxuan Xue · Yao Feng · Zhen Liu · Dan Zhang · Adrian Weller · Bernhard Schölkopf
- 2023 Poster: Certification of Distributional Individual Fairness » Matthew Wicker · Vihari Piratla · Adrian Weller
- 2023 Poster: Learning to Receive Help: Intervention-Aware Concept Embedding Models » Mateo Espinosa Zarlenga · Katie Collins · Krishnamurthy Dvijotham · Adrian Weller · Zohreh Shams · Mateja Jamnik
- 2022 Workshop: A causal view on dynamical systems » Sören Becker · Alexis Bellot · Cecilia Casolo · Niki Kilbertus · Sara Magliacane · Yuyang (Bernie) Wang
- 2022 Poster: Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off » Mateo Espinosa Zarlenga · Pietro Barbiero · Gabriele Ciravegna · Giuseppe Marra · Francesco Giannini · Michelangelo Diligenti · Zohreh Shams · Frederic Precioso · Stefano Melacci · Adrian Weller · Pietro Lió · Mateja Jamnik
- 2022 Poster: Sparsity in Continuous-Depth Neural Networks » Hananeh Aliee · Till Richter · Mikhail Solonin · Ignacio Ibarra · Fabian Theis · Niki Kilbertus
- 2022 Poster: Chefs' Random Tables: Non-Trigonometric Random Features » Valerii Likhosherstov · Krzysztof M Choromanski · Kumar Avinava Dubey · Frederick Liu · Tamas Sarlos · Adrian Weller
- 2022 Poster: A Survey and Datasheet Repository of Publicly Available US Criminal Justice Datasets » Miri Zilka · Bradley Butcher · Adrian Weller
- 2022 Poster: FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings » Jean Ogier du Terrail · Samy-Safwan Ayed · Edwige Cyffers · Felix Grimberg · Chaoyang He · Regis Loeb · Paul Mangold · Tanguy Marchand · Othmane Marfoq · Erum Mushtaq · Boris Muzellec · Constantin Philippenko · Santiago Silva · Maria Teleńczuk · Shadi Albarqouni · Salman Avestimehr · Aurélien Bellet · Aymeric Dieuleveut · Martin Jaggi · Sai Praneeth Karimireddy · Marco Lorenzi · Giovanni Neglia · Marc Tommasi · Mathieu Andreux
- 2022 Poster: Predicting Cellular Responses to Novel Drug Perturbations at a Single-Cell Resolution » Leon Hetzel · Simon Boehm · Niki Kilbertus · Stephan Günnemann · mohammad lotfollahi · Fabian Theis
- 2022 Poster: Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging » Edwige Cyffers · Mathieu Even · Aurélien Bellet · Laurent Massoulié
- 2021 Workshop: Privacy in Machine Learning (PriML) 2021 » Yu-Xiang Wang · Borja Balle · Giovanni Cherubin · Kamalika Chaudhuri · Antti Honkela · Jonathan Lebensold · Casey Meehan · Mi Jung Park · Adrian Weller · Yuqing Zhu
- 2021 Workshop: Human Centered AI » Michael Muller · Plamen P Angelov · Shion Guha · Marina Kogan · Gina Neff · Nuria Oliver · Manuel Rodriguez · Adrian Weller
- 2021 Workshop: Machine Learning Meets Econometrics (MLECON) » David Bruns-Smith · Arthur Gretton · Limor Gultchin · Niki Kilbertus · Krikamol Muandet · Evan Munro · Angela Zhou
- 2021 Workshop: AI for Science: Mind the Gaps » Payal Chandak · Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Gabriel Spadon · Max Tegmark · Hanchen Wang · Adrian Weller · Max Welling · Marinka Zitnik
- 2021 Poster: Federated Multi-Task Learning under a Mixture of Distributions » Othmane Marfoq · Giovanni Neglia · Aurélien Bellet · Laetitia Kameni · Richard Vidal
- 2021 Poster: On Component Interactions in Two-Stage Recommender Systems » Jiri Hron · Karl Krauth · Michael Jordan · Niki Kilbertus
- 2020 Workshop: Consequential Decisions in Dynamic Environments » Niki Kilbertus · Angela Zhou · Ashia Wilson · John Miller · Lily Hu · Lydia T. Liu · Nathan Kallus · Shira Mitchell
- 2020 Workshop: Privacy Preserving Machine Learning - PriML and PPML Joint Edition » Borja Balle · James Bell · Aurélien Bellet · Kamalika Chaudhuri · Adria Gascon · Antti Honkela · Antti Koskela · Casey Meehan · Olga Ohrimenko · Mi Jung Park · Mariana Raykova · Mary Anne Smart · Yu-Xiang Wang · Adrian Weller
- 2020 Poster: A Class of Algorithms for General Instrumental Variable Models » Niki Kilbertus · Matt Kusner · Ricardo Silva
- 2020 Poster: Ode to an ODE » Krzysztof Choromanski · Jared Quincy Davis · Valerii Likhosherstov · Xingyou Song · Jean-Jacques Slotine · Jacob Varley · Honglak Lee · Adrian Weller · Vikas Sindhwani
- 2020 Session: Orals & Spotlights Track 10: Social/Privacy » Yanan Sui · Aurélien Bellet
- 2019 : QUOTIENT: Two-Party Secure Neural Network Training & Prediction » Nitin Agrawal · Matt Kusner · Adria Gascon
- 2019 Workshop: Privacy in Machine Learning (PriML) » Borja Balle · Kamalika Chaudhuri · Antti Honkela · Antti Koskela · Casey Meehan · Mi Jung Park · Mary Anne Smart · Adrian Weller
- 2019 : Poster Session » Jonathan Scarlett · Piotr Indyk · Ali Vakilian · Adrian Weller · Partha P Mitra · Benjamin Aubin · Bruno Loureiro · Florent Krzakala · Lenka Zdeborová · Kristina Monakhova · Joshua Yurtsever · Laura Waller · Hendrik Sommerhoff · Michael Moeller · Rushil Anirudh · Shuang Qiu · Xiaohan Wei · Zhuoran Yang · Jayaraman Thiagarajan · Salman Asif · Michael Gillhofer · Johannes Brandstetter · Sepp Hochreiter · Felix Petersen · Dhruv Patel · Assad Oberai · Akshay Kamath · Sushrut Karmalkar · Eric Price · Ali Ahmed · Zahra Kadkhodaie · Sreyas Mohan · Eero Simoncelli · Carlos Fernandez-Granda · Oscar Leong · Wesam Sakla · Rebecca Willett · Stephan Hoyer · Jascha Sohl-Dickstein · Sam Greydanus · Gauri Jagatap · Chinmay Hegde · Michael Kellman · Jonathan Tamir · Nouamane Laanait · Ousmane Dia · Mirco Ravanelli · Jonathan Binas · Negar Rostamzadeh · Shirin Jalali · Tiantian Fang · Alex Schwing · Sébastien Lachapelle · Philippe Brouillard · Tristan Deleu · Simon Lacoste-Julien · Stella Yu · Arya Mazumdar · Ankit Singh Rawat · Yue Zhao · Jianshu Chen · Xiaoyang Li · Hubert Ramsauer · Gabrio Rizzuti · Nikolaos Mitsakos · Dingzhou Cao · Thomas Strohmer · Yang Li · Pei Peng · Gregory Ongie
- 2019 Workshop: Workshop on Human-Centric Machine Learning » Plamen P Angelov · Nuria Oliver · Adrian Weller · Manuel Rodriguez · Isabel Valera · Silvia Chiappa · Hoda Heidari · Niki Kilbertus
- 2019 Poster: Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models » Yunfei Teng · Wenbo Gao · François Chalus · Anna Choromanska · Donald Goldfarb · Adrian Weller
- 2018 : Aurélien Bellet » Aurélien Bellet
- 2018 Poster: Geometrically Coupled Monte Carlo Sampling » Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2018 Spotlight: Geometrically Coupled Monte Carlo Sampling » Mark Rowland · Krzysztof Choromanski · François Chalus · Aldo Pacchiano · Tamas Sarlos · Richard Turner · Adrian Weller
- 2018 Poster: Contamination Attacks and Mitigation in Multi-Party Machine Learning » Jamie Hayes · Olga Ohrimenko
- 2017 : Personalized and Private Peer-to-Peer Machine Learning » Aurélien Bellet · Rachid Guerraoui · Marc Tommasi
- 2017 : Invited talk: Challenges for Transparency » Adrian Weller
- 2017 : Closing remarks » Adrian Weller
- 2017 Symposium: Kinds of intelligence: types, tests and meeting the needs of society » José Hernández-Orallo · Zoubin Ghahramani · Tomaso Poggio · Adrian Weller · Matthew Crosby
- 2017 Poster: From Parity to Preference-based Notions of Fairness in Classification » Muhammad Bilal Zafar · Isabel Valera · Manuel Rodriguez · Krishna Gummadi · Adrian Weller
- 2017 Poster: Avoiding Discrimination through Causal Reasoning » Niki Kilbertus · Mateo Rojas Carulla · Giambattista Parascandolo · Moritz Hardt · Dominik Janzing · Bernhard Schölkopf
- 2017 Poster: The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings » Krzysztof Choromanski · Mark Rowland · Adrian Weller
- 2017 Poster: Uprooting and Rerooting Higher-Order Graphical Models » Mark Rowland · Adrian Weller
- 2016 Workshop: Private Multi-Party Machine Learning » Borja Balle · Aurélien Bellet · David Evans · Adrià Gascón
- 2016 Workshop: Reliable Machine Learning in the Wild » Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang
- 2016 Symposium: Machine Learning and the Law » Adrian Weller · Thomas D. Grant · Conrad McDonnell · Jatinder Singh
- 2016 Poster: On Graph Reconstruction via Empirical Risk Minimization: Fast Learning Rates and Scalability » Guillaume Papa · Aurélien Bellet · Stephan Clémençon
- 2015 Symposium: Algorithms Among Us: the Societal Impacts of Machine Learning » Michael A Osborne · Adrian Weller · Murray Shanahan
- 2015 Poster: SGD Algorithms based on Incomplete U-statistics: Large-Scale Minimization of Empirical Risk » Guillaume Papa · Stéphan Clémençon · Aurélien Bellet
- 2015 Poster: Extending Gossip Algorithms to Distributed Estimation of U-statistics » Igor Colin · Aurélien Bellet · Joseph Salmon · Stéphan Clémençon
- 2015 Spotlight: Extending Gossip Algorithms to Distributed Estimation of U-statistics » Igor Colin · Aurélien Bellet · Joseph Salmon · Stéphan Clémençon
- 2014 Poster: Clamping Variables and Approximate Inference » Adrian Weller · Tony Jebara
- 2014 Oral: Clamping Variables and Approximate Inference » Adrian Weller · Tony Jebara