Large language models (LLMs) have demonstrated strong reasoning capabilities in recommendation tasks by reformulating them as text-generation tasks. However, existing approaches either disregard or ineffectively model high-order user-item interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings to substantially improve LLMs' interpretation of graph-constructed interactions for recommendation, without requiring graph pre-training. This finding may inspire efforts to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embeddings. We also found that LLMs often recommend items based on users' earlier interactions rather than their most recent ones, and we present a reranking solution. ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendation.