Large pretrained models can be fine-tuned with differential privacy to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify that the magnitudes of gradients projected onto subspaces are a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition on the objective that we term restricted Lipschitz continuity and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients obtained during fine-tuning are mostly controlled by a few principal components. This behavior mirrors the conditions under which we obtain dimension-independent bounds in the convex setting. Together, our theoretical and empirical results provide a possible explanation for the recent success of large-scale private fine-tuning. Code to reproduce our results can be found at https://github.com/lxuechen/private-transformers/tree/main/examples/classification/spectral_analysis.
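The spectral claim above can be illustrated with a small sketch (this is not the authors' code, and the gradient matrix here is synthetic): stack per-step gradients as rows of a matrix and examine how quickly its singular values decay. If a few principal components capture nearly all of the spectral mass, the gradients are effectively low-dimensional.

```python
# Hedged sketch of a gradient spectral analysis. In practice, each row of
# `grads` would be a flattened gradient collected during fine-tuning; here
# we fabricate gradients dominated by a few directions plus small noise.
import numpy as np

rng = np.random.default_rng(0)
n_steps, dim, k_true = 200, 1000, 5

# Synthetic gradients: strong components along k_true directions, weak noise.
basis = rng.standard_normal((k_true, dim))
coeffs = 10.0 * rng.standard_normal((n_steps, k_true))
grads = coeffs @ basis + 0.1 * rng.standard_normal((n_steps, dim))

# Singular values of the stacked gradient matrix reveal the effective rank.
sing = np.linalg.svd(grads, compute_uv=False)
mass = np.cumsum(sing**2) / np.sum(sing**2)

# Number of principal components needed to capture 99% of spectral mass.
top_k = int(np.searchsorted(mass, 0.99) + 1)
print(f"components capturing 99% of spectral mass: {top_k} of {min(n_steps, dim)}")
```

With gradients concentrated in a few directions, `top_k` stays far below the ambient dimension, which is the qualitative behavior the paper reports for fine-tuning gradients.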
Author Information
Xuechen Li (Stanford University)
Daogao Liu (University of Washington, Seattle)
Tatsunori Hashimoto (Stanford University)
Huseyin A. Inan (Microsoft Research)
Janardhan Kulkarni (Microsoft Research)
Yin-Tat Lee
Abhradeep Guha Thakurta (Google Research - Brain Team)
More from the Same Authors
- 2021 Spotlight: Private Non-smooth ERM and SCO in Subquadratic Steps »
  Janardhan Kulkarni · Yin Tat Lee · Daogao Liu
- 2021 Spotlight: Differentially Private Model Personalization »
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2021 : Membership Inference Attacks Against NLP Classification Models »
  Virat Shejwalkar · Huseyin A Inan · Amir Houmansadr · Robert Sim
- 2022 : A Closer Look at the Calibration of Differentially Private Learners »
  Hanlin Zhang · Xuechen (Chen) Li · Prithviraj Sen · Salim Roukos · Tatsunori Hashimoto
- 2022 : Out-of-Distribution Robustness via Targeted Augmentations »
  Irena Gao · Shiori Sagawa · Pang Wei Koh · Tatsunori Hashimoto · Percy Liang
- 2022 : Data Feedback Loops: Model-driven Amplification of Dataset Biases »
  Rohan Taori · Tatsunori Hashimoto
- 2022 : Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification »
  Niladri S. Chatterji · Saminul Haque · Tatsunori Hashimoto
- 2022 Poster: Sampling with Riemannian Hamiltonian Monte Carlo in a Constrained Space »
  Yunbum Kook · Yin-Tat Lee · Ruoqi Shen · Santosh Vempala
- 2022 Poster: Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams »
  Sergey Denisov · H. Brendan McMahan · John Rush · Adam Smith · Abhradeep Guha Thakurta
- 2022 Poster: Factored DRO: Factored Distributionally Robust Policies for Contextual Bandits »
  Tong Mu · Yash Chandak · Tatsunori Hashimoto · Emma Brunskill
- 2022 Poster: Diffusion-LM Improves Controllable Text Generation »
  Xiang Li · John Thickstun · Ishaan Gulrajani · Percy Liang · Tatsunori Hashimoto
- 2022 Poster: Improving Self-Supervised Learning by Characterizing Idealized Representations »
  Yann Dubois · Stefano Ermon · Tatsunori Hashimoto · Percy Liang
- 2022 Poster: Differentially Private Model Compression »
  FatemehSadat Mireshghallah · Arturs Backurs · Huseyin A. Inan · Lukas Wutschitz · Janardhan Kulkarni
- 2021 : Panel: Future directions for tackling distribution shifts »
  Tatsunori Hashimoto · Jamie Morgenstern · Judy Hoffman · Andrew Beck
- 2021 Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision »
  Steven Y. Feng · Dor Arad Hudson · Tatsunori Hashimoto · Dongyeop Kang · Varun Prashant Gangal · Anusha Balakrishnan · Joel Tetreault
- 2021 Poster: Private Non-smooth ERM and SCO in Subquadratic Steps »
  Janardhan Kulkarni · Yin Tat Lee · Daogao Liu
- 2021 Poster: Differentially Private Model Personalization »
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2021 Poster: Fast and Memory Efficient Differentially Private-SGD via JL Projections »
  Zhiqi Bu · Sivakanth Gopi · Janardhan Kulkarni · Yin Tat Lee · Judy Hanwen Shen · Uthaipon Tantipongpipat
- 2021 Poster: Differentially Private n-gram Extraction »
  Kunho Kim · Sivakanth Gopi · Janardhan Kulkarni · Sergey Yekhanin
- 2021 Poster: A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks »
  Samuel Deng · Sanjam Garg · Somesh Jha · Saeed Mahloujifar · Mohammad Mahmoody · Abhradeep Guha Thakurta
- 2020 Poster: Privacy Amplification via Random Check-Ins »
  Borja Balle · Peter Kairouz · Brendan McMahan · Om Thakkar · Abhradeep Guha Thakurta
- 2020 Poster: The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space »
  Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2019 : Extended Poster Session »
  Travis LaCroix · Marie Ossenkopf · Mina Lee · Nicole Fitzgerald · Daniela Mihai · Jonathon Hare · Ali Zaidi · Alexander Cowen-Rivers · Alana Marzoev · Eugene Kharitonov · Luyao Yuan · Tomasz Korbak · Paul Pu Liang · Yi Ren · Roberto Dessì · Peter Potash · Shangmin Guo · Tatsunori Hashimoto · Percy Liang · Julian Zubek · Zipeng Fu · Song-Chun Zhu · Adam Lerer
- 2019 Poster: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Spotlight: Complexity of Highly Parallel Non-Smooth Convex Optimization »
  Sebastien Bubeck · Qijia Jiang · Yin-Tat Lee · Yuanzhi Li · Aaron Sidford
- 2019 Poster: Locally Private Gaussian Estimation »
  Matthew Joseph · Janardhan Kulkarni · Jieming Mao · Steven Wu
- 2017 Poster: Collecting Telemetry Data Privately »
  Bolin Ding · Janardhan Kulkarni · Sergey Yekhanin