

Poster

Data Attribution for Text-to-Image Models by Unlearning Synthesized Images

Sheng-Yu Wang · Alexei Efros · Aaron Hertzmann · Jun-Yan Zhu · Richard Zhang

East Exhibit Hall A-C #2603
[ Project Page ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The problem of data attribution for large text-to-image models is to identify the training images that most influenced the generation of a new image. We define "influence" as follows: for a given output, if the model were retrained from scratch without that output's most influential images, it should then fail to generate that output image. Unfortunately, directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining from scratch. We propose a new approach that efficiently identifies highly influential images. Specifically, we simulate unlearning the synthesized image, i.e., removing the model's ability to generate it. Our unlearning algorithm increases the training loss on the output image without catastrophic forgetting of other, unrelated concepts. We then find training images that are forgotten by proxy: those whose loss deviates significantly after the unlearning process, which we label as influential. We evaluate our method against a computationally intensive but "gold-standard" retraining from scratch, and demonstrate its advantages over previous methods.
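The abstract describes a two-step recipe: (1) fine-tune a copy of the model by gradient *ascent* on the synthesized image's training loss, simulating unlearning, and (2) rank training images by how much their loss changes under the unlearned weights. Below is a minimal PyTorch-style sketch of that recipe, not the paper's implementation: `model.training_loss`, its `reduce` flag, and the `(images, prompts, ids)` loader format are hypothetical placeholders, and the paper's safeguard against catastrophic forgetting is only flagged in a comment, not implemented.

```python
import copy
import torch

def unlearn_synthesized_image(model, synth_image, prompt, steps=50, lr=1e-5):
    """Sketch: raise the model's training loss on one synthesized image.

    `model.training_loss(images, prompts)` is a hypothetical helper returning
    the standard denoising training loss. The paper's algorithm additionally
    avoids catastrophic forgetting of unrelated concepts (omitted here, e.g.,
    one could regularize the weights toward their original values).
    """
    model = copy.deepcopy(model)  # keep the original weights intact
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        # Negate the loss so that minimizing it performs gradient *ascent*,
        # i.e., it increases the training loss on the synthesized image.
        loss = -model.training_loss(synth_image, prompt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def attribute_by_loss_deviation(original, unlearned, train_loader, top_k=100):
    """Score each training image by its loss change after unlearning."""
    scores = []
    for images, prompts, ids in train_loader:
        before = original.training_loss(images, prompts, reduce=False)
        after = unlearned.training_loss(images, prompts, reduce=False)
        for i, d in zip(ids, (after - before).tolist()):
            scores.append((d, i))
    # Largest loss increase = "forgotten by proxy" = most influential.
    scores.sort(reverse=True)
    return scores[:top_k]
```

The sorted loss deviations serve directly as influence scores: training images whose loss rises most after unlearning the synthesized image are the ones labeled influential.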
