Implicit assumptions and priors are often necessary in text-to-image generation tasks, especially when textual prompts lack sufficient context. However, these assumptions can sometimes reflect outdated concepts, inaccuracies, or societal biases embedded in the training data. We present Embedding-only Editing (Embedit), a method designed to efficiently adjust implicit assumptions and priors in the model without affecting its interpretation of unrelated objects or its overall performance. Given a "source" prompt (e.g., "rose") that elicits an implicit assumption (e.g., a rose is red) and a "destination" prompt that specifies the desired attribute (e.g., "blue rose"), Embedit fine-tunes only the word token embedding (WTE) of the target object ("rose") to optimize the last hidden state of the text encoder in Stable Diffusion, a state-of-the-art text-to-image model. This targeted adjustment prevents unintended effects on other objects in the model's knowledge base, as the WTEs of unrelated objects and the model weights remain unchanged. Consequently, when a prompt does not contain the edited object, all representations and model outputs are identical to those of the original, unedited model. Our method is highly efficient, modifying only 768 parameters for Stable Diffusion 1.4 and 2048 for Stable Diffusion XL in a single edit, matching the WTE dimension of each respective model. This minimal scope, combined with rapid execution, makes Embedit highly practical for real-world applications. Additionally, changes are easily reversible by restoring the original WTE rows. Our experimental results demonstrate that Embedit consistently outperforms previous methods across various models, tasks, and editing scenarios (both single and sequential multiple edits), improving by at least 6.01 percentage points (from 87.17% to 93.18%).
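The core mechanism can be sketched in a few lines of PyTorch. This is a minimal toy illustration, not the paper's implementation: the tiny frozen "encoder", the embedding dimensions, and the token ids for "rose" and "blue" are all made up for the sketch. It shows the essential idea of optimizing only one WTE row so the encoder's output for the source prompt moves toward its output for the destination prompt, leaving every other embedding row untouched.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the text encoder: a WTE layer followed by a frozen
# projection. Dimensions are illustrative (the real SD 1.4 WTE dim is 768).
vocab_size, dim = 100, 16
wte = nn.Embedding(vocab_size, dim)
encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
for p in encoder.parameters():
    p.requires_grad_(False)

# Hypothetical token ids: "rose" = 7, "blue" = 3 (invented for this sketch).
src_ids = torch.tensor([7])      # source prompt: "rose"
dst_ids = torch.tensor([3, 7])   # destination prompt: "blue rose"

# Target: the encoder's last hidden state for the destination prompt,
# computed once with the original, frozen embeddings (pooled for simplicity).
with torch.no_grad():
    target = encoder(wte(dst_ids)).mean(dim=0)

# Snapshot an unrelated row ("blue") to verify it is never modified.
row_blue_before = wte.weight[3].detach().clone()

# Embedit-style edit: optimize ONLY the WTE row of the target token ("rose").
edited_row = wte.weight[7].detach().clone().requires_grad_(True)
opt = torch.optim.Adam([edited_row], lr=1e-2)

init_loss = nn.functional.mse_loss(
    encoder(wte.weight[7].detach().unsqueeze(0)).mean(dim=0), target
).item()

for _ in range(500):
    opt.zero_grad()
    hidden = encoder(edited_row.unsqueeze(0)).mean(dim=0)
    loss = nn.functional.mse_loss(hidden, target)
    loss.backward()
    opt.step()

# Write the optimized row back. The edit is reversible: restoring the saved
# original row undoes it, and all other rows are bit-identical to the
# original model, so prompts that never mention "rose" are unaffected.
original_row = wte.weight[7].detach().clone()
with torch.no_grad():
    wte.weight[7] = edited_row
```

In the paper's actual setting, the frozen projection would be Stable Diffusion's CLIP text encoder and the loss would be taken over its full last-hidden-state sequence; only the optimization target and the single-row update shown here carry over directly.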