Large Language Models (LLMs) have recently emerged as a powerful backbone for recommender systems. Existing LLM-based recommender systems represent items in natural language in one of two ways, i.e., Attribute-based Representation or Description-based Representation. In this work, we aim to address the trade-off between efficiency and effectiveness that these two approaches encounter when representing items consumed by users. Motivated by our observation that there is significant information overlap between the images and descriptions associated with items, we propose a novel method, Item representation for LLM-based Recommender system (I-LLMRec). Our main idea is to leverage images as an alternative to lengthy textual descriptions for representing items, aiming to reduce token usage while preserving the rich semantic information of item descriptions. Through extensive experiments, we demonstrate that I-LLMRec outperforms existing methods in both efficiency and effectiveness by leveraging images. A further appeal of I-LLMRec is its reduced sensitivity to noise in descriptions, leading to more robust recommendations.