Identifying layers within text-to-image models that control visual attributes can facilitate efficient model editing through closed-form updates. Recent work leveraging causal tracing shows that early Stable-Diffusion variants confine knowledge primarily to the first layer of the CLIP text-encoder, while it diffuses throughout the UNet. Extending this framework, we observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing fails to pinpoint localized knowledge, highlighting challenges in model editing. To address this issue, we introduce the concept of Mechanistic Localization in text-to-image models, where knowledge about various visual attributes (e.g., ``style'', ``objects'', ``facts'') can be mechanistically localized to a small fraction of layers in the UNet, thus facilitating efficient model editing. We localize knowledge with our method LocoGen, which measures the direct effect of intermediate layers on output generation by performing interventions in the cross-attention layers of the UNet. We then employ LocoEdit, a fast closed-form editing method, across popular open-source text-to-image models (including the latest SD-XL) and explore the possibilities of neuron-level model editing. Using Mechanistic Localization, our work offers a better view of successes and failures in localization-based text-to-image model editing. Code will be available at \href{https://github.com/samyadeepbasu/LocoGen}{https://github.com/samyadeepbasu/LocoGen}.
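Concretely, a LocoGen-style intervention can be sketched as follows. This is a minimal illustration assuming a diffusers Stable-Diffusion pipeline; the model id, prompts, and probed layer indices are hypothetical choices for exposition, not the paper's actual configuration.

\begin{verbatim}
# Minimal sketch of a LocoGen-style cross-attention intervention.
# Assumptions: diffusers SD-1.5 pipeline; layer indices {7, 8} and
# the prompts are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt):
    # CLIP text embedding for one prompt, shape (1, 77, d)
    ids = pipe.tokenizer(prompt, padding="max_length",
                         max_length=pipe.tokenizer.model_max_length,
                         truncation=True, return_tensors="pt").input_ids
    return pipe.text_encoder(ids.to("cuda"))[0]

# Cross-attention modules of the UNet in forward order
# (named "attn2" in diffusers).
cross_attn = [m for n, m in pipe.unet.named_modules()
              if n.endswith("attn2")]

alt_emb = embed("a town")  # prompt with the style attribute removed
probe = {7, 8}             # hypothetical window of layers to intervene on

def swap(module, args):
    # Forward pre-hook on to_k / to_v: feed the altered text embedding
    # to this layer only, leaving all other layers unchanged.
    return (alt_emb.expand(args[0].shape[0], -1, -1),)

handles = [p.register_forward_pre_hook(swap)
           for i in probe
           for p in (cross_attn[i].to_k, cross_attn[i].to_v)]

# If the style vanishes from the output, knowledge about it is
# localized in the probed layers. (guidance_scale=1.0 disables CFG
# so the swapped batch shapes stay simple.)
img = pipe("a town in the style of Van Gogh",
           guidance_scale=1.0).images[0]
for h in handles:
    h.remove()
\end{verbatim}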
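Once a layer is located, a LocoEdit-style closed-form update can be sketched as a ridge-regression solve over the layer's value projection; the shapes, regularization weight, and prompt pairs below are illustrative assumptions rather than the method's exact hyperparameters.

\begin{verbatim}
# Minimal sketch of a closed-form edit to the value projection of
# one located cross-attention layer. `lam` and the prompt pairs are
# hypothetical.
import torch

def closed_form_edit(W_v, C_orig, C_target, lam=0.1):
    """
    W_v      : (d_out, d)  original value-projection weight
    C_orig   : (n, d)      embeddings of prompts with the source
                           concept (e.g., "... style of Van Gogh")
    C_target : (n, d)      embeddings of the same prompts with the
                           target concept (e.g., "... painting")
    Returns W' minimizing
      ||C_orig W'^T - C_target W_v^T||_F^2 + lam ||W' - W_v||_F^2
    """
    V = C_target @ W_v.T                 # desired outputs, (n, d_out)
    A = C_orig.T @ C_orig + lam * torch.eye(C_orig.shape[1],
                                            device=W_v.device)
    B = C_orig.T @ V + lam * W_v.T       # (d, d_out)
    return torch.linalg.solve(A, B).T    # (d_out, d)
\end{verbatim}

The same update would be applied to the key projection (to_k) of each layer identified by LocoGen; because the solve is closed-form, the edit requires no gradient-based fine-tuning.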