Large language model (LLM)-based recommender systems (RecSys) can flexibly adapt to different domains. They use in-context learning (ICL), i.e., prompts, to customize the recommendation function; these prompts contain sensitive historical user-item interactions, including implicit feedback such as clicked items and explicit product reviews. Such private information may be exposed by novel privacy attacks, yet no study has examined this important issue. We design four membership inference attacks (MIAs) aimed at revealing whether a victim's historical interactions are included in the system prompt: \emph{Similarity, Memorization, Inquiry, and Poisoning attacks}, each exploiting distinct properties of LLMs or RecSys. We carefully evaluate them on five of the latest open-source LLMs and three well-known RecSys benchmark datasets. The results confirm that the MIA threat to LLM-based RecSys is realistic: the inquiry and poisoning attacks achieve significantly high attack advantages. We also discuss possible methods to mitigate such MIA threats and analyze the factors that affect attack performance, such as the number of shots in the system prompt, the position of the victim among the shots, and the number of poisoned items in the prompt.