Large language models (LLMs) have demonstrated prominent reasoning capabilities in recommendation tasks by transforming them into text-generation tasks. However, existing approaches either disregard or ineffectively model user--item high-order interactions. To this end, this paper presents an enhanced LLM-based recommender (ELMRec). We enhance whole-word embeddings to substantially improve LLMs' interpretation of graph-constructed interactions for recommendations, without requiring graph pre-training. This finding may inspire endeavors to incorporate rich knowledge graphs into LLM-based recommenders via whole-word embeddings. We also found that LLMs tend to recommend items based on users' earlier interactions rather than their recent ones, and we present a reranking solution. Our ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendation.