Poster
Privately Learning Subspaces
Vikrant Singhal · Thomas Steinke
Private data analysis suffers a costly curse of dimensionality. However, the data often has an underlying low-dimensional structure. For example, when optimizing via gradient descent, the gradients often lie in or near a low-dimensional subspace. If that low-dimensional structure can be identified, then we can avoid paying (in terms of privacy or accuracy) for the high ambient dimension. We present differentially private algorithms that take input data sampled from a low-dimensional linear subspace (possibly with a small amount of error) and output that subspace (or an approximation to it). These algorithms can serve as a pre-processing step for other procedures.
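The abstract does not spell out the poster's algorithm, but a standard baseline for this task is the Gaussian mechanism applied to the data's second-moment matrix ("Analyze Gauss", Dwork, Talwar, Thakurta, and Zhang, 2014): add symmetric Gaussian noise to X^T X and take the top-k eigenvectors of the result. The sketch below illustrates only that baseline, not the algorithm presented here; the function name private_subspace and the assumption that every row has norm at most 1 are choices made for illustration.

import numpy as np

def private_subspace(X, k, eps, delta, rng=None):
    """Illustrative sketch (NOT the poster's algorithm): estimate a k-dim
    subspace from the rows of X, each assumed to have L2 norm <= 1, via the
    Gaussian mechanism on the second-moment matrix ("Analyze Gauss")."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    A = X.T @ X  # second-moment matrix; Frobenius sensitivity <= 1 under row norms <= 1
    # Gaussian-mechanism noise scale for (eps, delta)-DP with sensitivity 1.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    # Draw i.i.d. noise on and above the diagonal, then mirror to keep symmetry.
    upper = rng.normal(scale=sigma, size=(d, d))
    noise = np.triu(upper) + np.triu(upper, 1).T
    A_priv = A + noise
    # The top-k eigenvectors span the estimated subspace (eigh sorts ascending).
    _, vecs = np.linalg.eigh(A_priv)
    return vecs[:, -k:]  # d x k orthonormal basis

if __name__ == "__main__":
    # Toy usage: data exactly on a 2-dim subspace of R^50, clipped to norm <= 1.
    rng = np.random.default_rng(0)
    d, k, n = 50, 2, 10_000
    basis, _ = np.linalg.qr(rng.normal(size=(d, k)))  # true subspace
    X = rng.normal(size=(n, k)) @ basis.T
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    V = private_subspace(X, k, eps=1.0, delta=1e-6, rng=rng)
    print(np.linalg.norm(basis.T @ V))  # close to sqrt(k) when subspaces align

Unlike this noisy-covariance baseline, whose error grows with the ambient dimension d, the poster's setting assumes the data lies in (or near) the subspace itself, which is what makes dimension-independent recovery possible.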
Author Information
Vikrant Singhal (University of Waterloo)
Thomas Steinke (Google Research)
More from the Same Authors
- 2022 Poster: New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma
  Gautam Kamath · Argyris Mouzakis · Vikrant Singhal
- 2022 Poster: Private Estimation with Public Data
  Alex Bie · Gautam Kamath · Vikrant Singhal
- 2020 Poster: The Discrete Gaussian for Differential Privacy
  Clément L Canonne · Gautam Kamath · Thomas Steinke
- 2019 Poster: Private Hypothesis Selection
  Mark Bun · Gautam Kamath · Thomas Steinke · Steven Wu
- 2019 Poster: Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation
  Mark Bun · Thomas Steinke