Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to a range of malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause of the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that allows the model to clearly separate instructions from data by using distinct embeddings for each. Instead of training these embeddings from scratch, we propose a method to convert an existing model to ASIDE form: we keep two copies of the original model's embedding layer and apply an orthogonal rotation to one of them. We demonstrate the effectiveness of our method by showing (1) substantially increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of the model's representations.
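The conversion described above can be sketched as follows. This is an illustrative toy only, not the paper's exact recipe: the embedding table is a random stand-in, and the particular orthogonal matrix (here obtained via QR decomposition) is an assumption; the key property is that data tokens are embedded through a rotated copy of the instruction embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 16  # toy sizes for illustration

# Stand-in for a pretrained embedding table (instruction-role copy)
E_instr = rng.normal(size=(vocab, dim))

# One way to obtain an orthogonal matrix: QR decomposition of a
# random Gaussian matrix (the paper's specific rotation may differ)
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

# Data-role tokens use a rotated copy of the same embeddings
E_data = E_instr @ Q

def embed(token_ids, is_instruction):
    """Look up embeddings from the role-appropriate table."""
    table = E_instr if is_instruction else E_data
    return table[token_ids]

# Orthogonal rotation preserves norms and pairwise geometry,
# so the data-role embeddings carry the same information,
# only expressed in a rotated subspace
assert np.allclose(np.linalg.norm(E_instr, axis=1),
                   np.linalg.norm(E_data, axis=1))
```

Because the rotation is orthogonal, no information is lost: the model can still read the token identity from a data embedding, but the role (instruction vs. data) is encoded in which copy of the table produced it.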