

Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

Forgetting Data from Pre-trained GANs

Zhifeng Kong · Kamalika Chaudhuri


Abstract:

Large pre-trained generative models are known to occasionally output undesirable samples, which undermines their trustworthiness. The common way to mitigate this is to re-train them from scratch with different data or different regularization, which consumes substantial computational resources and does not always fully address the problem. In this work, we take a different, more compute-friendly approach and investigate how to post-edit a model after training so that it “forgets”, or refrains from outputting, certain kinds of samples. We show that forgetting is different from data deletion, and that data deletion does not always lead to forgetting. We then consider Generative Adversarial Networks (GANs) and provide three different algorithms for data forgetting that differ in how the samples to be forgotten are described. Extensive evaluations on real-world image datasets show that our algorithms outperform data-deletion baselines and are capable of forgetting data while retaining high generation quality, at a fraction of the cost of full re-training.
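The abstract does not spell out the three algorithms, so the sketch below is only a loose illustration of the general idea of post-editing a pre-trained generator so it avoids a described forget set: it fine-tunes the generator against a differentiable `undesired_score` function (a hypothetical stand-in for however the samples to be forgotten are specified) while anchoring to a frozen copy of the original model to retain generation quality. All names (`forget_finetune`, `undesired_score`) and the loss design are assumptions for illustration, not the authors' method.

```python
import copy
import torch
import torch.nn.functional as F

def forget_finetune(generator, undesired_score, steps=1000, lr=1e-4, lam=10.0,
                    z_dim=128, batch_size=64, device="cpu"):
    """Illustrative post-editing loop (not the paper's algorithm).

    generator:       pre-trained GAN generator, maps latent z -> sample
    undesired_score: differentiable function returning a per-sample score in [0, 1],
                     higher = closer to the samples we want the model to forget
    """
    # Frozen copy of the pre-trained generator, used as an anchor so the
    # edited model keeps producing high-quality samples elsewhere.
    reference = copy.deepcopy(generator).eval()
    for p in reference.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        z = torch.randn(batch_size, z_dim, device=device)
        fake = generator(z)

        # Push generated samples away from the described forget set.
        forget_loss = undesired_score(fake).mean()

        # Stay close to the original generator's outputs on the same latents,
        # so overall generation quality is retained.
        with torch.no_grad():
            anchor = reference(z)
        retain_loss = F.mse_loss(fake, anchor)

        loss = forget_loss + lam * retain_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

Compared with full re-training, a loop like this only updates the existing generator for a modest number of steps, which is the compute saving the abstract refers to; the trade-off between forgetting strength and retained quality is controlled here by the hypothetical weight `lam`.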
