Instruction-tuned Large Language Models (LLMs) show impressive results in numerous practical applications, but they lack essential safety features that are common in other areas of computer science, particularly an explicit separation of instructions and data. This makes them vulnerable to manipulations such as indirect prompt injections and generally unsuitable for safety-critical tasks. Surprisingly, there is currently no established definition or benchmark to quantify this phenomenon. In this work, we close this gap by introducing a formal measure for instruction-data separation and an empirical variant that is calculable from a model's outputs. We also present a new dataset, SEP, that allows estimating the measure for real-world models. Our results on various LLMs show that the problem of instruction-data separation is real: all models fail to achieve high separation, and canonical mitigation techniques, such as prompt engineering and fine-tuning, either fail to substantially improve separation or reduce model utility. The source code and SEP dataset are openly accessible at https://github.com/egozverev/Shold-It-Be-Executed-Or-Processed.