Large language models (LLMs) have shown promising potential for next Point-of-Interest (POI) recommendation. However, existing methods rely solely on direct zero-shot prompting, which leads to ineffective extraction of user preferences, insufficient injection of collaborative signals, and a lack of user privacy protection. To address these issues, we propose a novel Multitask Reflective Large Language Model for Privacy-preserving Next POI Recommendation (MRP-LLM), which exploits LLMs for better next POI recommendation while preserving user privacy. Specifically, the Multitask Reflective Preference Extraction Module first utilizes LLMs to distill each user's fine-grained (i.e., categorical, temporal, and spatial) preferences into a knowledge base (KB). The Neighbor Preference Retrieval Module then retrieves and summarizes the preferences of similar users from the KB to obtain collaborative signals. Subsequently, aggregating the user's preferences with those of similar users, the Multitask Next POI Recommendation Module generates the next POI recommendations via multitask prompting. Meanwhile, a Privacy Transmission Module is specifically devised to protect sensitive POI data during data collection. Extensive experiments on three real-world datasets demonstrate the efficacy of our proposed MRP-LLM in providing more accurate next POI recommendations while preserving user privacy.