Large language models (LLMs) have demonstrated strong reasoning capabilities in recommendation tasks by reformulating them as text-generation tasks. However, existing approaches either disregard or ineffectively model high-order user-item interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings to substantially improve LLMs' interpretation of graph-constructed interactions for recommendation, without requiring graph pre-training. This finding may inspire efforts to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embeddings. We also found that LLMs often recommend items based on users' earlier interactions rather than recent ones, and we present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendation.