Large language models (LLMs) have become effective backbones for retrieval systems, including Retrieval-Augmented Generation (RAG), dense information retrieval (IR), and agent memory retrieval. Recent studies have demonstrated that such LLM-based Retrieval (LLMR) is vulnerable to adversarial attacks that manipulate documents through token-level injections, enabling adversaries to either boost or suppress those documents in retrieval results. However, existing attack studies mainly (1) presume the attacker knows the target query, and (2) rely heavily on access to the victim model's parameters or interactions, which are rarely available in real-world scenarios, limiting their validity. To further explore the security risks of LLMR, we propose a practical black-box attack that generates transferable injection tokens using zero-shot surrogate LLMs, requiring neither victim queries nor knowledge of the victim model. The effectiveness of our attack also raises a broader robustness concern: similar effects may arise from benign or unintended document edits in the real world. To realize the attack, we first establish a theoretical framework for LLMR and verify it empirically. Within this framework, we formulate the transferable attack as a min-max problem and propose an adversarial learning mechanism that finds optimal adversarial tokens against learnable query samples. Experiments on benchmark datasets validate the effectiveness of our attack across popular LLM retrievers.
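To make the min-max formulation concrete, the following is a minimal toy sketch (not the paper's implementation): it greedily searches for injection tokens that raise a document's similarity to the *hardest* of a set of sampled queries under a stand-in surrogate embedding model. All names, the random-projection "encoder", and the greedy outer loop are illustrative assumptions.

```python
# Toy min-max injection search: outer max over injected tokens,
# inner min over sampled queries. Hypothetical stand-in for a
# surrogate-LLM-based transferable attack; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16
token_emb = rng.normal(size=(VOCAB, DIM))  # surrogate token embedding table

def embed(token_ids):
    # Mean-pooled, L2-normalized embedding (stand-in for a surrogate encoder).
    v = token_emb[np.asarray(token_ids)].mean(axis=0)
    return v / np.linalg.norm(v)

# "Learnable" query samples, here just randomly drawn token sequences.
queries = [rng.integers(0, VOCAB, size=5) for _ in range(8)]
doc = list(rng.integers(0, VOCAB, size=10))

def worst_case_sim(doc_ids):
    # Inner min of the min-max objective: similarity to the hardest query.
    d = embed(doc_ids)
    return min(float(embed(q) @ d) for q in queries)

# Outer max: greedily append a token only if it improves the worst case.
injected = list(doc)
for _ in range(3):
    cand = max(range(VOCAB), key=lambda t: worst_case_sim(injected + [t]))
    if worst_case_sim(injected + [cand]) > worst_case_sim(injected):
        injected.append(cand)

print(worst_case_sim(doc), worst_case_sim(injected))
```

In the actual attack, the greedy token search would be replaced by the paper's adversarial learning mechanism and the random queries by optimized query samples; the sketch only illustrates the structure of the objective.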