Large Language Models (LLMs), characterized by self-supervised training on broad amounts of data, have shown impressive performance across a wide range of tasks. Indeed, their generative abilities have sparked interest in applying LLMs across a wide range of contexts. However, neural networks in general, and LLMs in particular, are known to be vulnerable to adversarial attacks, where an imperceptible change to the input can mislead the output of the model. This is a serious concern that impedes the use of LLMs in high-stakes applications, such as healthcare, where a wrong prediction can have serious consequences. Even though there are many efforts to make LLMs more robust to adversarial attacks, there are almost no works that study \emph{how} and \emph{where} the vulnerabilities that make LLMs prone to adversarial attacks arise. Motivated by these facts, we explore how to localize and understand vulnerabilities, and propose a method, based on Mechanistic Interpretability (MI) techniques, to guide this process. Specifically, this method enables us to detect vulnerabilities related to a concrete task by (i) obtaining the subset of the model that is responsible for that task, (ii) generating adversarial samples for that task, and (iii) using MI techniques together with the previous samples to discover and understand the possible vulnerabilities. We showcase our method on a pretrained GPT-2 Small model carrying out the task of predicting 3-letter acronyms, demonstrating its effectiveness at locating and understanding concrete vulnerabilities of the model.