Retrieval-Augmented Generation (RAG) improves pre-trained models by incorporating external knowledge at test time to enable customized adaptation. We study the risk of datastore leakage in Retrieval-In-Context RAG Language Models (LMs). We show that an adversary can exploit LMs' instruction-following capabilities to easily extract text data verbatim from the datastore of RAG systems built with instruction-tuned LMs via prompt injection. The vulnerability exists for a wide range of modern LMs that span Llama2, Mistral/Mixtral, Vicuna, SOLAR, WizardLM, Qwen1.5, and Platypus2, and the exploitability exacerbates as the model size scales up. Extending our study to production RAG models GPTs, we design an attack that can cause datastore leakage with a 100% success rate on 25 randomly selected customized GPTs with at most 2 queries, and we extract text data verbatim at a rate of 41% from a book of 77,000 words and 3% from a corpus of 1,569,000 words by prompting the GPTs with only 100 queries generated by themselves.
翻译:检索增强生成(RAG)通过在测试时融入外部知识来改进预训练模型,从而实现定制化适应。我们研究了检索上下文RAG语言模型(LM)中数据存储泄露的风险。我们证明,攻击者可以利用LM的指令遵循能力,通过提示注入,轻易地从基于指令微调LM构建的RAG系统的数据存储中逐字提取文本数据。这一漏洞广泛存在于包括Llama2、Mistral/Mixtral、Vicuna、SOLAR、WizardLM、Qwen1.5和Platypus2在内的多种现代LM中,且随着模型规模增大,可利用性加剧。将研究扩展到生产级RAG模型GPTs,我们设计了一种攻击方法,在随机选取的25个定制GPTs上,仅需最多2次查询即可实现100%成功率的数存储泄露;通过仅使用GPTs自身生成的100条查询进行提示,我们成功从一本77,000词的书籍中逐字提取文本数据的比率为41%,从一个1,569,000词的语料库中提取的比率为3%。