Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts. However, the advancement of T2I diffusion models presents significant risks, as the models could be exploited for malicious purposes, such as generating images with violence or nudity, or creating unauthorized portraits of public figures in inappropriate contexts. To mitigate these risks, concept removal methods have been proposed. These methods aim to modify diffusion models to prevent the generation of malicious and unwanted concepts. Despite these efforts, existing research faces several challenges: (1) a lack of consistent comparisons on a comprehensive dataset, (2) ineffective prompts for harmful and nudity concepts, and (3) the overlooked evaluation of a model's ability to generate the benign parts of prompts that also contain malicious concepts. To address these gaps, we propose to benchmark concept removal methods by introducing a new dataset, Six-CD, along with a novel evaluation metric. In this benchmark, we conduct a thorough evaluation of concept removal methods, with the experimental observations and discussions offering valuable insights into the field.