Poster
Algorithms that Approximate Data Removal: New Results and Limitations
Vinith Suriyakumar · Ashia Wilson
We study the problem of deleting user data from machine learning models trained using empirical risk minimization (ERM). Our focus is on learning algorithms that return the empirical risk minimizer and on approximate unlearning algorithms that comply with deletion requests arriving in an online manner. Leveraging the infinitesimal jackknife, we develop an online unlearning algorithm that is both computationally and memory efficient. Unlike prior memory-efficient unlearning algorithms, we target ERM-trained models that minimize objectives with non-smooth regularizers, such as the commonly used $\ell_1$, elastic net, or nuclear norm penalties. We also provide generalization, deletion capacity, and unlearning guarantees that are consistent with state-of-the-art methods. Across a variety of benchmark datasets, our algorithm empirically improves upon the runtime of prior methods while maintaining the same memory requirements and test accuracy. Finally, we open a new direction of inquiry by proving that all approximate unlearning algorithms introduced so far fail to unlearn in problem settings where common hyperparameter tuning methods, such as cross-validation, have been used to select models.
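The abstract invokes the infinitesimal jackknife without stating the update it yields. Below is a minimal sketch of the generic influence-function (one-step Newton) deletion update that such approaches build on, assuming access to the inverse Hessian of the empirical risk; the function name `ij_delete`, the `grad_fn` interface, and the $1/(n-1)$ scaling are illustrative assumptions, not the authors' algorithm, and the sketch omits the paper's treatment of non-smooth regularizers and any noise calibration needed for an approximate-unlearning guarantee.

```python
import numpy as np

def ij_delete(theta, H_inv, grad_fn, z, n):
    """Approximate retraining-from-scratch after deleting example z.

    One-step infinitesimal-jackknife / influence-function update:
    removing z perturbs the empirical risk, and the first-order
    correction moves the minimizer by H^{-1} g / (n - 1).

    theta   : current empirical risk minimizer, shape (d,)
    H_inv   : inverse Hessian of the empirical risk at theta, shape (d, d)
    grad_fn : grad_fn(theta, z) -> loss gradient on example z, shape (d,)
    z       : the training example to delete
    n       : training set size before deletion
    """
    g = grad_fn(theta, z)                # contribution of the deleted point
    return theta + (H_inv @ g) / (n - 1) # first-order correction to theta
```

Under these assumptions, each deletion request in an online stream costs one gradient evaluation and one matrix-vector product, rather than retraining the model from scratch.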
Author Information
Vinith Suriyakumar (Massachusetts Institute of Technology)
Ashia Wilson (MIT)
More from the Same Authors
- 2022 : Sufficient conditions for non-asymptotic convergence of Riemannian optimization methods »
  Vishwak Srinivasan · Ashia Wilson
- 2022 : When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction »
  Vinith Suriyakumar · Marzyeh Ghassemi · Berk Ustun
- 2022 : Sufficient Conditions for Non-asymptotic Convergence of Riemannian Optimization Methods »
  Vishwak Srinivasan · Ashia Wilson
- 2022 Workshop: Robustness in Sequence Modeling »
  Nathan Ng · Haoran Zhang · Vinith Suriyakumar · Chantal Shaib · Kyunghyun Cho · Yixuan Li · Alice Oh · Marzyeh Ghassemi