Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While recent studies have demonstrated the potential of such attacks, they typically rely on impractical assumptions, such as white-box access or known user queries, thereby underestimating the difficulty of real-world exploitation. In this paper, we bridge this gap by proposing MIRAGE, a novel multi-stage poisoning pipeline designed for strict black-box and query-agnostic environments. Operating on surrogate model feedback, MIRAGE functions as an automated optimization framework that integrates three key mechanisms: it utilizes persona-driven query synthesis to approximate latent user search distributions, employs semantic anchoring to imperceptibly embed these intents for high retrieval visibility, and leverages an adversarial variant of Test-Time Preference Optimization (TPO) to maximize persuasion. To rigorously evaluate this threat, we construct a new benchmark derived from three long-form, domain-specific datasets. Extensive experiments demonstrate that MIRAGE significantly outperforms existing baselines in both attack efficacy and stealthiness, exhibiting remarkable transferability across diverse retriever-LLM configurations and highlighting the urgent need for robust defense strategies.