Diffusion models have achieved remarkable success in Text-to-Image generation tasks, leading to the development of many commercial models. However, recent studies have reported that diffusion models often replicate images from their training data when triggered by specific prompts, potentially raising social issues ranging from copyright to privacy concerns. To address this memorization, recent studies have developed mitigation methods for diffusion models. Nevertheless, the lack of benchmarks impedes the assessment of the true effectiveness of these methods. In this work, we present MemBench, the first benchmark for evaluating image memorization mitigation methods. Our benchmark includes a large number of memorized-image trigger prompts across various Text-to-Image diffusion models. Furthermore, in contrast to prior work evaluating mitigation performance only on trigger prompts, we present metrics evaluated on both trigger prompts and general prompts, so that we can assess whether mitigation methods address the memorization issue while maintaining generation performance on general prompts. This is an important consideration for practical applications that previous works have overlooked. Through evaluation on MemBench, we verify that the performance of existing image memorization mitigation methods is still insufficient for application to diffusion models. The code and datasets are available at https://github.com/chunsanHong/MemBench\_code.