Knowledge Graphs (KGs) represent relationships between entities in a graph structure and have been widely studied as promising tools for recommendations that exploit accurate content information about items. However, traditional KG-based recommendation methods face fundamental challenges: insufficient consideration of temporal information and poor performance in cold-start scenarios. Large Language Models (LLMs), by contrast, can be regarded as databases rich in knowledge learned from web data, and they have recently attracted attention for their potential as recommendation systems. Although approaches that treat LLMs as recommendation systems can leverage the LLMs' high recommendation literacy, their input token limits make it impractical to consider the entire recommendation-domain dataset, resulting in scalability issues. To address these challenges, we propose an LLM's Intuition-aware Knowledge graph Reasoning model (LIKR). Our key idea is to treat the LLM as a reasoner that outputs intuitive exploration strategies for the KG. To integrate the knowledge of LLMs and KGs, we train a recommendation agent via reinforcement learning with a reward function that combines different recommendation strategies, including the LLM's intuition and KG embeddings. By incorporating temporal awareness through prompt engineering and generating textual representations of user preferences from limited interactions, LIKR improves recommendation performance in cold-start scenarios. Furthermore, LIKR avoids scalability issues by representing the recommendation-domain dataset as a KG and limiting the LLM's output to KG exploration strategies. Experiments on real-world datasets demonstrate that our model outperforms state-of-the-art recommendation methods in cold-start sequential recommendation scenarios.