LLMs are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results, and email plugins summarize emails by passing their content through an LLM. However, the potentially untrusted provenance of these data sources opens the door to prompt injection attacks, in which natural language instructions embedded in the external data manipulate the LLM into deviating from the user's original instruction(s). We define this deviation as task drift. Task drift is a significant concern because it allows attackers to exfiltrate data or influence the LLM's output for other users. We study LLM activations as a means of detecting task drift, showing that activation deltas (the difference in activations before and after the model processes external data) are strongly correlated with this phenomenon. Using two probing methods, we demonstrate that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set. We evaluate these methods under minimal assumptions about how users' tasks, system prompts, and attacks are phrased, and we observe that the approach generalizes surprisingly well to unseen tasks and attack types, including prompt injections, jailbreaks, and malicious instructions, without being trained on any of these attacks. Notably, the solution requires no modification to the LLM (e.g., fine-tuning) and is compatible with existing meta-prompting defenses, making it cost-efficient and easy to deploy. To encourage further research on activation-based task inspection, decoding, and interpretability, we release our large-scale TaskTracker toolkit, comprising a dataset of over 500K instances, representations from six state-of-the-art language models, and a suite of inspection tools.
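To make the core idea concrete, the following is a minimal, self-contained sketch of activation-delta probing. Everything here is illustrative: the "activations" are synthetic vectors (a real deployment would read hidden states from the LLM before and after it processes the external data), and the assumed drift direction, dimensionality, and noise levels are invented for the example. The probe itself is a plain logistic-regression linear classifier trained on the deltas, matching the abstract's description of a simple linear classifier.

```python
import math
import random

random.seed(0)

D = 16  # hypothetical hidden-state dimensionality (illustrative only)

def make_delta(poisoned):
    """Activation delta: hidden state after reading external data minus
    the hidden state before. Assumption for this toy example: injected
    instructions shift activations along a fixed direction, while clean
    external data only adds noise."""
    drift = 0.8 if poisoned else 0.0
    return [drift + random.gauss(0.0, 0.3) for _ in range(D)]

def make_set(n):
    labels = [random.randint(0, 1) for _ in range(n)]  # 1 = injected
    deltas = [make_delta(y == 1) for y in labels]
    return deltas, labels

train_x, train_y = make_set(400)

# Linear probe: logistic regression fit with plain gradient descent.
w = [0.0] * D
b = 0.0
lr = 0.5
for _ in range(300):
    gw = [0.0] * D
    gb = 0.0
    for x, y in zip(train_x, train_y):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = 1.0 / (1.0 + math.exp(-z)) - y  # prediction minus label
        for i in range(D):
            gw[i] += err * x[i]
        gb += err
    w = [wi - lr * gi / len(train_x) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(train_x)

# Evaluate on held-out deltas.
test_x, test_y = make_set(100)
preds = [int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0) for x in test_x]
accuracy = sum(p == y for p, y in zip(preds, test_y)) / len(test_y)
print(f"probe accuracy on held-out deltas: {accuracy:.2f}")
```

Because the probe only consumes activation deltas, it sits entirely outside the model: no fine-tuning is needed, and it can run alongside meta-prompting defenses, which is the deployment property the abstract highlights.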