The widespread adoption of Large Language Models (LLMs), exemplified by OpenAI's ChatGPT, brings to the forefront the imperative to defend against adversarial threats on these models. These attacks, which manipulate an LLM's output by introducing malicious inputs, undermine the model's integrity and the trust users place in its outputs. In response to this challenge, our paper presents an innovative defensive strategy, given white box access to an LLM, that harnesses residual activation analysis between transformer layers of the LLM. We apply a novel methodology for analyzing distinctive activation patterns in the residual streams for attack prompt classification. We curate multiple datasets to demonstrate how this method of classification has high accuracy across multiple types of attack scenarios, including our newly-created attack dataset. Furthermore, we enhance the model's resilience by integrating safety fine-tuning techniques for LLMs in order to measure its effect on our capability to detect attacks. The results underscore the effectiveness of our approach in enhancing the detection and mitigation of adversarial inputs, advancing the security framework within which LLMs operate.
翻译:以OpenAI的ChatGPT为代表的大型语言模型(LLMs)的广泛采用,使得防御针对这些模型的对抗性威胁变得至关重要。此类攻击通过引入恶意输入来操纵LLM的输出,损害模型的完整性及用户对其输出的信任。为应对这一挑战,本文在拥有LLM白盒访问权限的前提下,提出了一种创新的防御策略,该策略利用LLM Transformer层间的残差激活分析。我们采用一种新颖的方法来分析残差流中的独特激活模式,以实现对攻击提示的分类。我们构建了多个数据集,以证明这种分类方法在多种攻击场景(包括我们新创建的攻击数据集)中均具有高准确率。此外,我们通过集成针对LLM的安全性微调技术来增强模型的鲁棒性,从而评估该技术对我们攻击检测能力的影响。结果凸显了我们的方法在增强对抗性输入检测与缓解方面的有效性,推进了LLM运行所在的安全框架。