Large language model (LLM)-based recommender systems (RecSys) can adapt flexibly to different domains. They use in-context learning (ICL), i.e., prompts, to customize recommendation functions; these prompts include sensitive historical user-item interactions, encompassing implicit feedback such as clicked items and explicit product reviews. Such private information may be exposed by novel privacy attacks, yet no study has examined this important issue. We design several membership inference attacks (MIAs) aimed at revealing whether system prompts include a victim's historical interactions. The attacks are \emph{Similarity, Memorization, Inquiry, and Poisoning attacks}, each exploiting unique features of LLMs or RecSys. We carefully evaluate them on five of the latest open-source LLMs and three well-known RecSys benchmark datasets. The results confirm that the MIA threat to LLM RecSys is realistic: the Inquiry and Poisoning attacks achieve significantly high attack advantages. We also discuss possible methods to mitigate such MIA threats and analyze the factors affecting these attacks, such as the number of shots in the system prompt, the position of the victim among the shots, and the number of poisoned items in the prompt.
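To make the threat model concrete, the following is a minimal sketch of an inquiry-style membership inference attack, in which the adversary simply asks the deployed system whether an item appears in the in-prompt interaction history. All names here (`query_recsys`, `inquiry_attack`) are hypothetical illustrations, not the paper's implementation; the stub simulates a system that leaks membership by answering affirmatively for items present in its system prompt.

```python
# Hypothetical sketch of an inquiry-style membership inference attack (MIA)
# against an LLM-based recommender. `query_recsys` is a stand-in for the
# deployed model's interface; a real attack would call the target system.

def query_recsys(prompt: str, system_shots: list[str]) -> str:
    """Stub for the target LLM RecSys. Simulates a model that leaks
    membership: it answers 'Yes' when the queried item occurs in any
    of the few-shot interaction histories in its system prompt."""
    for shot in system_shots:
        if any(item in prompt for item in shot.split(", ")):
            return "Yes, that item appears in the interaction history."
    return "No, I have no record of that item."

def inquiry_attack(item: str, system_shots: list[str]) -> bool:
    """Ask the system directly whether the victim interacted with `item`
    and infer membership from an affirmative answer."""
    answer = query_recsys(
        f"Did this user previously interact with '{item}'?", system_shots
    )
    return answer.lower().startswith("yes")

shots = ["The Matrix, Inception, Blade Runner"]  # victim's in-prompt history
inquiry_attack("Inception", shots)  # → True  (member item)
inquiry_attack("Titanic", shots)    # → False (non-member item)
```

A real attacker would not have a leakage oracle this clean; the attack's advantage depends on how readily the underlying LLM reveals prompt contents when questioned, which is what the evaluation measures.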