Title: Differentially Private Learning with Margin Guarantees
Abstract:
Preserving privacy is a crucial objective for machine learning algorithms. But despite the remarkable theoretical and algorithmic progress in differential privacy over the last decade or more, its application to learning still faces several obstacles.
A recent series of publications has shown that differentially private PAC learning of infinite hypothesis sets is not possible, even for common hypothesis sets such as that of linear functions. Another rich body of literature has studied differentially private empirical risk minimization in the constrained optimization setting and shown that the resulting guarantees are necessarily dimension-dependent. In the unconstrained setting, dimension-independent bounds have been given, but they depend on the norm of a vector that can be extremely large, which makes them uninformative.
These results raise a fundamental question about private learning for common high-dimensional problems: is differentially private learning with favorable, dimension-independent guarantees possible for standard hypothesis sets?
This talk presents a series of new differentially private algorithms for learning linear classifiers, kernel classifiers, and neural-network classifiers with dimension-independent, confidence-margin guarantees.
Joint work with Raef Bassily and Ananda Theertha Suresh.
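As background for the abstract above: differential privacy is typically obtained by adding noise calibrated to the sensitivity of a computation. The sketch below is purely illustrative and is not one of the algorithms from the talk; it shows the standard Gaussian mechanism applied to a clipped mean, with hypothetical parameter choices.

```python
import math
import random

def gaussian_mechanism_mean(data, epsilon, delta, clip=1.0, seed=None):
    """Release an (epsilon, delta)-differentially private estimate of a mean.

    Each value is clipped to [-clip, clip], so changing one record moves the
    mean by at most 2 * clip / n (the sensitivity).  Gaussian noise with the
    standard calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    is then added to the clipped mean.
    """
    rng = random.Random(seed)
    clipped = [max(-clip, min(clip, x)) for x in data]
    n = len(clipped)
    sensitivity = 2.0 * clip / n
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return sum(clipped) / n + rng.gauss(0.0, sigma)

# With many bounded samples and epsilon = 1, the injected noise is small,
# so the private mean stays close to the true (clipped) mean.
rng = random.Random(0)
data = [rng.uniform(-1.0, 1.0) for _ in range(10_000)]
private_mean = gaussian_mechanism_mean(data, epsilon=1.0, delta=1e-5, seed=1)
print(private_mean)
```

Note that the noise scale grows with the dimension of the released statistic in the constrained ERM setting discussed above, which is precisely the dependency the talk's margin-based guarantees avoid.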
Author Information
Mehryar Mohri (Google Research & Courant Institute of Mathematical Sciences)
Mehryar Mohri is a Professor of Computer Science and Mathematics at the Courant Institute of Mathematical Sciences and a Research Consultant at Google. Prior to these positions, he spent about ten years at AT&T Bell Labs, later AT&T Labs-Research, where he served for several years as a Department Head and a Technology Leader. His research interests span several areas: primarily machine learning, algorithms and theory, automata theory, speech processing, and natural language processing, as well as computational biology. His research in learning theory and algorithms has been used in a variety of applications. His work on automata theory and algorithms has served as the foundation for several applications in language processing, with several of his algorithms deployed in virtually all spoken-dialog and speech recognition systems in the United States. He has co-authored several software libraries widely used in research and academic labs, and is co-author of the machine learning textbook Foundations of Machine Learning, used in graduate courses on machine learning at several universities and corporate research laboratories.
More from the Same Authors
- 2019 Talk: "Learning with Sample-Dependent Hypothesis Sets" (Mehryar Mohri)
- 2017 Talk: Tight Learning Bounds for Multi-Class Classification (Mehryar Mohri)
- 2017 Invited Talk: Regret Minimization against Strategic Buyers (Mehryar Mohri)
- 2017 Poster: Discriminative State Space Models (Vitaly Kuznetsov · Mehryar Mohri)
- 2017 Poster: Online Learning with Transductive Regret (Scott Yang · Mehryar Mohri)
- 2017 Poster: Parameter-Free Online Learning via Model Selection (Dylan J Foster · Satyen Kale · Mehryar Mohri · Karthik Sridharan)
- 2017 Spotlight: Parameter-Free Online Learning via Model Selection (Dylan J Foster · Satyen Kale · Mehryar Mohri · Karthik Sridharan)
- 2017 Spotlight: Online Learning with Transductive Regret (Scott Yang · Mehryar Mohri)
- 2016 Tutorial: Theory and Algorithms for Forecasting Non-Stationary Time Series (Vitaly Kuznetsov · Mehryar Mohri)
- 2015 Poster: Revenue Optimization against Strategic Buyers (Mehryar Mohri · Andres Munoz)
- 2015 Poster: Learning Theory and Algorithms for Forecasting Non-stationary Time Series (Vitaly Kuznetsov · Mehryar Mohri)
- 2015 Oral: Learning Theory and Algorithms for Forecasting Non-stationary Time Series (Vitaly Kuznetsov · Mehryar Mohri)