We interpret the function of individual neurons in CLIP by automatically describing them with text. Analyzing either the direct effects (i.e., the flow from a neuron through the residual stream to the output) or the indirect effects (the overall contribution) fails to capture the neurons' function in CLIP. Therefore, we present the "second-order lens", which analyzes the effect flowing from a neuron through the later attention heads directly to the output. We find that these effects are highly selective: for each neuron, the effect is significant for less than 2% of the images. Moreover, each effect can be approximated by a single direction in the text-image space of CLIP. We describe neurons by decomposing these directions into sparse sets of text representations. The sets reveal polysemantic behavior: each neuron corresponds to multiple, often unrelated, concepts (e.g. ships and cars). Exploiting this neuron polysemy, we mass-produce "semantic" adversarial examples by generating images with concepts spuriously correlated to the incorrect class. Additionally, we use the second-order effects for zero-shot segmentation and attribute discovery in images. Our results indicate that a scalable understanding of neurons can be used for model deception and for introducing new model capabilities.
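As a minimal illustration of the sparse-decomposition idea (a sketch, not the authors' implementation), a direction in the joint text-image space can be approximated by a small set of text embeddings with a greedy, orthogonal-matching-pursuit-style procedure. All names and the toy dictionary below are hypothetical:

```python
import numpy as np

def sparse_decompose(direction, text_embs, k=3):
    """Greedily approximate `direction` as a sparse combination of rows
    of `text_embs` (one row per candidate text representation).
    Returns the indices of the k selected rows and their coefficients."""
    residual = direction.copy()
    selected = []
    for _ in range(k):
        # pick the text embedding most correlated with the current residual
        scores = text_embs @ residual
        selected.append(int(np.argmax(np.abs(scores))))
        # re-fit coefficients jointly on the selected set, then update residual
        A = text_embs[selected].T  # shape (d, |selected|)
        coeffs, *_ = np.linalg.lstsq(A, direction, rcond=None)
        residual = direction - A @ coeffs
    return selected, coeffs

# toy example: random unit-norm stand-ins for text embeddings
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
true_dir = 0.8 * D[3] + 0.5 * D[17]
idx, coefs = sparse_decompose(true_dir, D, k=2)
```

The selected indices play the role of the text descriptions attached to a neuron; with real CLIP text embeddings, each index would name one of the (possibly unrelated) concepts the neuron responds to.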