Large language models (LLMs) have achieved remarkable success due to their exceptional generative capabilities. Despite their success, they also have inherent limitations, such as a lack of up-to-date knowledge and hallucination. Retrieval-Augmented Generation (RAG) is a state-of-the-art technique to mitigate these limitations. In particular, given a question, RAG retrieves relevant knowledge from a knowledge database to augment the input of the LLM. For instance, when the knowledge database contains millions of texts collected from Wikipedia, the retrieved knowledge could be the top-k texts that are most semantically similar to the given question. As a result, the LLM can utilize the retrieved knowledge as context to generate an answer to the given question. Existing studies mainly focus on improving the accuracy or efficiency of RAG, leaving its security largely unexplored. We aim to bridge this gap in this work. In particular, we propose PoisonedRAG, a set of knowledge poisoning attacks on RAG, in which an attacker injects a few poisoned texts into the knowledge database such that the LLM generates an attacker-chosen target answer for an attacker-chosen target question. We formulate knowledge poisoning attacks as an optimization problem whose solution is a set of poisoned texts. Depending on the attacker's background knowledge of the RAG system (e.g., black-box and white-box settings), we propose two methods to solve the optimization problem, one for each setting. Our results on multiple benchmark datasets and LLMs show that our attacks can achieve 90% attack success rates when injecting 5 poisoned texts for each target question into a database with millions of texts. We also evaluate recent defenses, and our results show that they are insufficient to defend against our attacks, highlighting the need for new defenses.
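The retrieval step described above, selecting the top-k texts most semantically similar to the question and prepending them to the LLM's input, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function here is a toy hash-based stand-in for a real dense retriever encoder, and all function names are assumptions.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy deterministic pseudo-embedding (illustrative only); a real RAG
    # system would use a learned text encoder here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve_top_k(question: str, database: list[str], k: int = 5) -> list[str]:
    # Rank every text in the knowledge database by cosine similarity
    # (dot product of unit vectors) to the question and keep the top k.
    q = embed(question)
    sims = np.array([q @ embed(t) for t in database])
    top = np.argsort(-sims)[:k]  # indices of the k most similar texts
    return [database[i] for i in top]

def build_prompt(question: str, database: list[str], k: int = 5) -> str:
    # The retrieved texts become the context the LLM conditions on.
    context = "\n".join(retrieve_top_k(question, database, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

A knowledge poisoning attack of the kind the abstract describes targets exactly this pipeline: if an attacker can insert texts into `database` that rank among the top k for a target question, those texts enter the LLM's context and can steer its answer.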