Differentially private learning has seen limited success for deep learning models of text, resulting in a perception that differential privacy may be incompatible with the language model fine-tuning paradigm. We demonstrate that this perception is inaccurate and that, with the right setup, high-performing private models can be learned on moderately sized corpora by directly fine-tuning with differentially private optimization. Our work highlights the important role of hyperparameters, task formulations, and pretrained models. Our analyses also show that the low performance of naive differentially private baselines in prior work is attributable to suboptimal choices in these factors. Empirical results reveal that differentially private optimization does not suffer from dimension-dependent performance degradation with pretrained models and achieves performance on par with state-of-the-art private training procedures and strong non-private baselines.
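The "differentially private optimization" the abstract refers to is typically DP-SGD: clip each example's gradient to a fixed norm, average, and add calibrated Gaussian noise before the parameter update. A minimal NumPy sketch of one such update step (the function name, hyperparameter defaults, and the use of explicit per-example gradients are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average the clipped gradients, add Gaussian noise, and step."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std scales with the clipping bound and shrinks with batch size.
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape,
    )
    return params - lr * (mean_grad + noise)
```

With `noise_multiplier=0` this reduces to ordinary clipped SGD, which makes the clipping behavior easy to check in isolation; the privacy guarantee itself comes from accounting over many noisy steps, which this sketch does not include.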
Author Information
Xuechen (Chen) Li (Stanford University)
Florian Tramer (Google)
Percy Liang (Stanford University)

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
Tatsunori Hashimoto (Stanford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 : Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing »
  Tue. Dec 14th 08:45 -- 09:00 PM Room
More from the Same Authors
- 2021 : Ensembles and Cocktails: Robust Finetuning for Natural Language Generation »
  John Hewitt · Xiang Li · Sang Michael Xie · Benjamin Newman · Percy Liang
- 2021 : Calibrated Ensembles: A Simple Way to Mitigate ID-OOD Accuracy Tradeoffs »
  Ananya Kumar · Aditi Raghunathan · Tengyu Ma · Percy Liang
- 2021 : How Does Contrastive Pre-training Connect Disparate Domains? »
  Kendrick Shen · Robert Jones · Ananya Kumar · Sang Michael Xie · Percy Liang
- 2021 : Extending the WILDS Benchmark for Unsupervised Adaptation »
  Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 : Is Importance Weighting Incompatible with Interpolating Classifiers? »
  Ke Alexander Wang · Niladri Chatterji · Saminul Haque · Tatsunori Hashimoto
- 2022 : A Closer Look at the Calibration of Differential Private Learners »
  Hanlin Zhang · Xuechen (Chen) Li · Prithviraj Sen · Salim Roukos · Tatsunori Hashimoto
- 2023 Poster: Students Parrot Their Teachers: Membership Inference on Model Distillation »
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2023 Poster: Are aligned neural networks adversarially aligned? »
  Nicholas Carlini · Florian Tramer · Daphne Ippolito · Ludwig Schmidt · Milad Nasr · Matthew Jagielski · Pang Wei Koh · Irena Gao · Christopher Choquette-Choo
- 2023 Poster: Counterfactual Memorization in Neural Language Models »
  Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini
- 2023 Poster: AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback »
  Yann Dubois · Xuechen Li · Rohan Taori · Tianyi Zhang · Ishaan Gulrajani · Jimmy Ba · Carlos Guestrin · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Students Parrot Their Teachers: Membership Inference on Model Distillation »
  Matthew Jagielski · Milad Nasr · Katherine Lee · Christopher Choquette-Choo · Nicholas Carlini · Florian Tramer
- 2022 Poster: When Does Differentially Private Learning Not Suffer in High Dimensions? »
  Xuechen Li · Daogao Liu · Tatsunori Hashimoto · Huseyin A. Inan · Janardhan Kulkarni · Yin-Tat Lee · Abhradeep Guha Thakurta
- 2022 Poster: Increasing Confidence in Adversarial Robustness Evaluations »
  Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
- 2022 Poster: The Privacy Onion Effect: Memorization is Relative »
  Nicholas Carlini · Matthew Jagielski · Chiyuan Zhang · Nicolas Papernot · Andreas Terzis · Florian Tramer
- 2021 : Is Importance Weighting Incompatible with Interpolating Classifiers? »
  Ke Alexander Wang · Niladri Chatterji · Saminul Haque · Tatsunori Hashimoto
- 2021 Poster: Antipodes of Label Differential Privacy: PATE and ALIBI »
  Mani Malek Esmaeili · Ilya Mironov · Karthik Prasad · Igor Shilov · Florian Tramer
- 2021 Poster: Efficient and Accurate Gradients for Neural SDEs »
  Patrick Kidger · James Foster · Xuechen (Chen) Li · Terry Lyons
- 2020 Poster: On Adaptive Attacks to Adversarial Example Defenses »
  Florian Tramer · Nicholas Carlini · Wieland Brendel · Aleksander Madry
- 2019 Poster: Adversarial Training and Robustness for Multiple Perturbations »
  Florian Tramer · Dan Boneh
- 2019 Spotlight: Adversarial Training and Robustness for Multiple Perturbations »
  Florian Tramer · Dan Boneh
- 2019 Poster: Unlabeled Data Improves Adversarial Robustness »
  Yair Carmon · Aditi Raghunathan · Ludwig Schmidt · John Duchi · Percy Liang
- 2019 Poster: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond »
  Xuechen (Chen) Li · Denny Wu · Lester Mackey · Murat Erdogdu
- 2019 Spotlight: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond »
  Xuechen (Chen) Li · Denny Wu · Lester Mackey · Murat Erdogdu
- 2018 : Contributed talk 6: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware »
  Florian Tramer
- 2018 Workshop: Workshop on Security in Machine Learning »
  Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer
- 2018 Poster: Isolating Sources of Disentanglement in Variational Autoencoders »
  Tian Qi Chen · Xuechen (Chen) Li · Roger Grosse · David Duvenaud
- 2018 Oral: Isolating Sources of Disentanglement in Variational Autoencoders »
  Tian Qi Chen · Xuechen (Chen) Li · Roger Grosse · David Duvenaud
- 2018 Poster: A Retrieve-and-Edit Framework for Predicting Structured Outputs »
  Tatsunori Hashimoto · Kelvin Guu · Yonatan Oren · Percy Liang
- 2018 Oral: A Retrieve-and-Edit Framework for Predicting Structured Outputs »
  Tatsunori Hashimoto · Kelvin Guu · Yonatan Oren · Percy Liang
- 2017 Poster: Unsupervised Transformation Learning via Convex Relaxations »
  Tatsunori Hashimoto · Percy Liang · John Duchi