Retrieval-Augmented Generation (RAG) is a state-of-the-art technique that mitigates issues such as hallucination and knowledge staleness in Large Language Models (LLMs) by retrieving relevant knowledge from an external database to support content generation. Existing research has demonstrated potential privacy risks associated with the LLM component of RAG. However, the privacy risks introduced by integrating an external database, which often contains sensitive data such as medical records or personal identities, remain largely unexplored. In this paper, we bridge this gap by studying the membership privacy of RAG's external database: determining whether a given sample is part of that database. Our key intuition is that if a sample is in the external database, the text generated by the RAG system will exhibit high semantic similarity to it. We present S$^2$MIA, a \underline{M}embership \underline{I}nference \underline{A}ttack that exploits the \underline{S}emantic \underline{S}imilarity between a given sample and the content generated by the RAG system. With S$^2$MIA, we demonstrate that the membership privacy of the RAG database can be breached. Extensive experimental results show that S$^2$MIA achieves strong inference performance compared with five existing MIAs and evades three representative defenses.
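The core decision rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed threshold, and the bag-of-words cosine similarity (a stand-in for the sentence-embedding similarity an actual attack would use) are all assumptions for exposition.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors.

    A real attack would embed both texts with a sentence-embedding
    model; this lexical version only illustrates the comparison step.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def infer_membership(target_sample: str, rag_output: str,
                     threshold: float = 0.6) -> bool:
    """Predict 'member' when the RAG system's generated text is
    semantically close to the target sample (threshold is illustrative)."""
    return cosine_similarity(target_sample, rag_output) >= threshold
```

In practice the threshold would be calibrated on shadow data rather than fixed, and the similarity computed in an embedding space so that paraphrases of a member record still score highly.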