A Closer Look at the Calibration of Differentially Private Learners
Hanlin Zhang · Xuechen (Chen) Li · Prithviraj Sen · Salim Roukos · Tatsunori Hashimoto

We systematically study the calibration of classifiers trained with differentially private stochastic gradient descent (DP-SGD) and observe miscalibration across a wide range of vision and language tasks. Our analysis identifies per-example gradient clipping in DP-SGD as a major cause of miscalibration, and we show that existing baselines for improving private calibration provide only small reductions in calibration error while occasionally causing large degradations in accuracy. As a solution, we show that differentially private variants of post-processing calibration methods such as temperature calibration and Platt scaling are surprisingly effective and have negligible utility cost to the overall model. Across 7 tasks, temperature calibration and Platt scaling with DP-SGD result in an average 55-fold reduction in expected calibration error while incurring at most a 1.59 percent drop in accuracy.
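The recalibration step described in the abstract amounts to fitting a single temperature on a held-out split and measuring expected calibration error (ECE). The sketch below is a minimal, non-private illustration of these two ingredients under our own assumptions: the grid-search fit and variable names are not from the paper, and the paper's private variant would instead fit the parameter with a DP optimizer on the recalibration split.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard ECE: bin predictions by confidence and compare
    per-bin accuracy against per-bin mean confidence."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing negative log-likelihood on a
    held-out recalibration split (illustrative grid search only)."""
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Recalibration then consists of dividing test-time logits by the fitted temperature before the softmax; accuracy is unchanged because the argmax is invariant to a positive rescaling, which is consistent with the negligible utility cost noted above.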

Author Information

Hanlin Zhang (School of Computer Science, Carnegie Mellon University)
Xuechen (Chen) Li (Stanford University)
Prithviraj Sen (IBM Almaden Research Center)
Salim Roukos (IBM)

Salim Roukos is an IBM Fellow working on multilingual NLP, using machine (and deep) learning models for language translation, information extraction, and language understanding.

Tatsunori Hashimoto (Stanford)

More from the Same Authors