Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization
Zijie Zhang · Yang Zhou · Xin Zhao · Tianshi Che · Lingjuan Lyu

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #919

The right to be forgotten calls for efficient machine unlearning techniques that make trained machine learning models forget a cohort of data. Combining training and unlearning operations, as traditional machine unlearning methods do, often incurs a high computational cost on large-scale data. This paper presents a prompt certified machine unlearning algorithm, PCMU, which performs a one-time operation of simultaneous training and unlearning in advance for a series of machine unlearning requests, without knowledge of the removed/forgotten data. First, we establish a connection between randomized smoothing for certified robustness on classification and randomized smoothing for certified machine unlearning on gradient quantization. Second, we propose a prompt certified machine unlearning model based on randomized data smoothing and gradient quantization. We theoretically derive the certified radius R for the data change before and after data removals, and the certified budget of data removals with respect to R. Last but not least, because the first framework struggles to produce high-confidence certificates, we present a second practical framework based on randomized gradient smoothing and quantization. We theoretically derive the certified radius R' for the gradient change, the correlation between the two types of certified radii, and the certified budget of data removals with respect to R'.
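The core mechanism of the second framework — smoothing a gradient with random noise and then quantizing it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `smoothed_quantized_gradient`, the noise scale `sigma`, the quantization step `step`, and all numeric values are illustrative assumptions.

```python
import numpy as np

def smoothed_quantized_gradient(grad, sigma=0.1, step=0.5, n_samples=1000, seed=0):
    """Monte Carlo randomized smoothing of a gradient, followed by quantization.

    Hypothetical sketch: `sigma` is the Gaussian noise scale and `step` the
    quantization grid width; neither name nor default comes from the paper.
    """
    rng = np.random.default_rng(seed)
    # Sample noisy copies of the gradient and average them (randomized smoothing).
    noisy = grad + rng.normal(0.0, sigma, size=(n_samples,) + np.shape(grad))
    smoothed = noisy.mean(axis=0)
    # Snap the smoothed gradient onto a grid of width `step` (quantization).
    return np.round(smoothed / step) * step

# Toy intuition for the certificate: if removing a small cohort of data
# perturbs the gradient by less than the quantization tolerance, both
# gradients snap to the same grid point, so the model update is unchanged.
g_full    = np.array([0.93, -1.48])   # gradient on the full dataset (toy values)
g_removed = np.array([0.91, -1.52])   # gradient after removing a small cohort
print(smoothed_quantized_gradient(g_full))     # same quantized vector...
print(smoothed_quantized_gradient(g_removed))  # ...as this one
```

Here both gradients quantize to the same vector, which is the sense in which the quantized training step is already "unlearned" with respect to small removals; the paper's certified radius R' bounds how large the gradient change may be while this invariance is guaranteed.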

Author Information

Zijie Zhang (Auburn University)
Yang Zhou (Auburn University)
Xin Zhao (Auburn University)
Tianshi Che (Auburn University)
Lingjuan Lyu (Sony AI)