Identifying layers within text-to-image models that control visual attributes can facilitate efficient model editing through closed-form updates. Recent work leveraging causal tracing shows that early Stable-Diffusion variants confine knowledge primarily to the first layer of the CLIP text-encoder, while it diffuses throughout the UNet. Extending this framework, we observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing fails to pinpoint localized knowledge, highlighting challenges in model editing. To address this issue, we introduce the concept of Mechanistic Localization in text-to-image models, where knowledge about various visual attributes (e.g., "style", "objects", "facts") can be mechanistically localized to a small fraction of layers in the UNet, thus facilitating efficient model editing. We localize knowledge with our method LocoGen, which measures the direct effect of intermediate layers on output generation by performing interventions in the cross-attention layers of the UNet. We then employ LocoEdit, a fast closed-form editing method, across popular open-source text-to-image models (including the latest SD-XL) and explore the possibilities of neuron-level model editing. Using Mechanistic Localization, our work offers a better view of successes and failures in localization-based text-to-image model editing. Code will be available at https://github.com/samyadeepbasu/LocoGen.
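The intervention idea behind LocoGen can be illustrated with a toy, self-contained sketch: run a stack of cross-attention layers, then swap the text conditioning fed to a small window of layers and measure how much the output shifts. All names, shapes, and the window size below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy sketch of a LocoGen-style cross-attention intervention.
# Everything here (shapes, layer count, window size) is an assumption
# for illustration, not the authors' real UNet or method.
import numpy as np

rng = np.random.default_rng(0)
D, T = 8, 4  # embedding dimension, token count

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(h, c, Wq, Wk, Wv):
    """Single-head cross-attention: image features h attend to text embedding c."""
    q, k, v = h @ Wq, c @ Wk, c @ Wv
    attn = softmax(q @ k.T / np.sqrt(D))
    return h + attn @ v  # residual connection

n_layers = 6
weights = [tuple(rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
           for _ in range(n_layers)]

def generate(c_per_layer, h0):
    """Run the layer stack, feeding each layer its own text conditioning."""
    h = h0
    for (Wq, Wk, Wv), c in zip(weights, c_per_layer):
        h = cross_attention(h, c, Wq, Wk, Wv)
    return h

h0 = rng.standard_normal((T, D))
c_orig = rng.standard_normal((T, D))  # original prompt embedding
c_alt = rng.standard_normal((T, D))   # altered prompt (target attribute removed)

baseline = generate([c_orig] * n_layers, h0)

# Intervention probe: replace the conditioning at a sliding window of
# layers and record how far the output moves from the baseline.
for start in range(n_layers - 1):
    conds = [c_orig] * n_layers
    conds[start] = conds[start + 1] = c_alt  # window of size 2
    effect = np.linalg.norm(generate(conds, h0) - baseline)
    print(f"layers {start}-{start + 1}: effect {effect:.2f}")
```

In a real model, windows whose intervention strongly changes the generated attribute would mark the cross-attention layers where that knowledge is localized, making them candidates for a closed-form edit.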