The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. Influence is defined counterfactually: for a given output, if the model were retrained from scratch without the most influential images, it would fail to reproduce that output. Directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining models from scratch. In our work, we propose an efficient data attribution method that simulates unlearning the synthesized image. We achieve this by increasing the training loss on the output image, without catastrophic forgetting of other, unrelated concepts. We then identify training images with significant loss deviations after the unlearning process and label these as influential. We evaluate our method against a computationally intensive but "gold-standard" retraining-from-scratch procedure and demonstrate our method's advantages over previous methods.
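The unlearning-then-scoring idea above can be illustrated with a toy sketch. This is only a minimal numpy analogue under strong assumptions: a linear least-squares model stands in for the text-to-image model, gradient ascent on the output's loss stands in for the paper's unlearning step, and all names (`per_example_loss`, `x_out`, `deviation`) are illustrative, not the authors' implementation.

```python
import numpy as np

# Toy sketch of unlearning-based attribution (assumption: a linear
# least-squares model stands in for the generative model).
rng = np.random.default_rng(0)
n, d = 30, 20
X = rng.normal(size=(n, d))                  # "training images" (features)
w_true = rng.normal(size=d)
y = X @ w_true + 0.05 * rng.normal(size=n)   # "training targets"

# Train the stand-in model with a least-squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def per_example_loss(w, X, y):
    # Per-example squared error, the analogue of the training loss.
    return (X @ w - y) ** 2

# A "synthesized output" to attribute: close to training example 0, but
# offset so its loss is nonzero and gradient ascent has room to climb.
x_out, y_out = X[0], y[0] + 1.0

# Simulate unlearning: a few steps of gradient *ascent* on the output's
# loss, increasing it without retraining the model from scratch.
w_unlearned = w.copy()
lr = 0.01
for _ in range(10):
    grad = 2.0 * (x_out @ w_unlearned - y_out) * x_out
    w_unlearned += lr * grad  # ascend to raise the loss on the output

# Attribution score: how much each training example's loss deviates
# after unlearning; large deviations mark influential examples.
deviation = per_example_loss(w_unlearned, X, y) - per_example_loss(w, X, y)
ranking = np.argsort(-deviation)             # most influential first
print("most influential training index:", ranking[0])
```

In this toy setup, the unlearning update moves the weights along the output's feature direction, so the training example most aligned with the output (index 0 here) shows the largest loss deviation and ranks first.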