Invited Talk #1, Differentially Private Learning with Margin Guarantees, Mehryar Mohri

Fri Dec 02 06:15 AM -- 06:40 AM (PST) @

Preserving privacy is a crucial objective for machine learning algorithms. Yet, despite the remarkable theoretical and algorithmic progress in differential privacy over the last decade, its application to learning still faces several obstacles.

A recent series of publications has shown that differentially private PAC learning of infinite hypothesis sets is not possible, even for common hypothesis sets such as linear functions. Another rich body of literature has studied differentially private empirical risk minimization in the constrained optimization setting and shown that its guarantees are necessarily dimension-dependent. In the unconstrained setting, dimension-independent bounds have been given, but they depend on the norm of a vector that can be extremely large, which makes them uninformative.
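To see where the dimension dependence comes from, consider the classical Gaussian mechanism, a standard building block in the differentially private ERM literature (this is background, not one of the talk's algorithms; the function name and parameters below are illustrative). Privatizing a d-dimensional parameter vector by adding independent Gaussian noise to each coordinate injects noise whose expected squared norm grows linearly with d:

```python
import numpy as np

def gaussian_mechanism(w, sensitivity, epsilon, delta, rng=None):
    """Release vector w with (epsilon, delta)-differential privacy
    via the classical Gaussian mechanism: add i.i.d. Gaussian noise
    with scale sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return w + rng.normal(0.0, sigma, size=np.shape(w))

# Privatizing a d-dimensional weight vector adds noise with expected
# squared norm d * sigma**2 -- error that grows with the dimension d,
# which is the dimension dependence the abstract refers to.
d = 1000
w = np.zeros(d)
w_private = gaussian_mechanism(w, sensitivity=1.0, epsilon=1.0, delta=1e-5)
```

The noise calibration shown is the standard one for the Gaussian mechanism; sharper calibrations exist, but the linear-in-d error growth is what motivates the search for dimension-independent guarantees.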

These results raise a fundamental question about private learning for common high-dimensional problems: is differentially private learning with favorable, dimension-independent guarantees possible for standard hypothesis sets?

This talk presents a series of new differentially private algorithms for learning linear classifiers, kernel classifiers, and neural-network classifiers with dimension-independent, confidence-margin guarantees.
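For readers unfamiliar with the notion, the confidence margin of a linear classifier on a labeled point is the signed score y·⟨w, x⟩, and a standard empirical margin quantity is the fraction of training points whose margin falls below a threshold ρ. The sketch below uses this common zero-one variant of the margin loss (a textbook definition, not the specific quantity or algorithm of the talk; names are illustrative):

```python
import numpy as np

def empirical_margin_loss(scores, y, rho):
    """Fraction of points whose confidence margin y * score is at most rho.
    This zero-one variant upper-bounds the usual ramp-based margin loss."""
    margins = y * scores
    return float(np.mean(margins <= rho))

# Toy example with a fixed linear classifier w.
X = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.5, 0.0]])
y = np.array([1, 1, -1, 1])
w = np.array([1.0, 0.0])
scores = X @ w
# margins are [2, 0, 2, 0.5]; with rho = 1, two of four points
# fall at or below the margin threshold.
print(empirical_margin_loss(scores, y, rho=1.0))  # → 0.5
```

Margin-based bounds depend on such quantities (and on norms of the data and predictor) rather than on the ambient dimension, which is what makes dimension-independent guarantees plausible for margin-based learners.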

Joint work with Raef Bassily and Ananda Theertha Suresh.

Author Information

Mehryar Mohri (Google Research & Courant Institute of Mathematical Sciences)

Mehryar Mohri is a Professor of Computer Science and Mathematics at the Courant Institute of Mathematical Sciences and a Research Consultant at Google. Prior to these positions, he spent about ten years at AT&T Bell Labs, later AT&T Labs-Research, where he served for several years as a Department Head and a Technology Leader. His research interests cover a number of different areas: primarily machine learning, algorithms and theory, automata theory, speech processing, and natural language processing, as well as computational biology. His research in learning theory and algorithms has been used in a variety of applications. His work on automata theory and algorithms has served as the foundation for several applications in language processing, with several of his algorithms deployed in virtually all spoken-dialog and speech recognition systems in the United States. He has co-authored several software libraries widely used in research and academic labs. He is also co-author of the machine learning textbook Foundations of Machine Learning, used in graduate courses on machine learning at several universities and corporate research laboratories.
