Understanding the inner workings of large-scale deep neural networks is challenging yet crucial in several high-stakes applications. Mechanistic interpretability is an emerging field that tackles this challenge, often by identifying human-understandable subgraphs in deep neural networks known as circuits. In vision-pretrained models, these subgraphs are usually interpreted by visualizing their node features through a popular technique called feature visualization. Recent works have analyzed the stability of different feature visualization types under the adversarial model manipulation framework. This paper begins by addressing limitations of existing works, proposing a novel attack called ProxPulse that simultaneously manipulates both types of feature visualizations. Surprisingly, when analyzing these attacks through the lens of visual circuits, we find that visual circuits show some robustness to ProxPulse. We therefore introduce a new attack based on ProxPulse that unveils the manipulability of visual circuits, shedding light on their lack of robustness. The effectiveness of these attacks is validated using pre-trained AlexNet and ResNet-50 models on ImageNet.