Sequential Recommender Systems (SRS) are widely applied across domains to predict a user's next interaction by modeling their interaction sequences. However, these systems typically grapple with the long-tail problem: they struggle to recommend less popular items, which diminishes user discovery and reduces vendor revenue, negatively impacting the system as a whole. Large Language Models (LLMs) have the potential to understand the semantic connections between items regardless of their popularity, positioning them as a viable solution to this dilemma. In this paper, we present LLMEmb, a novel technique that harnesses an LLM to create item embeddings that bolster SRS performance. To align the capabilities of a general-purpose LLM with the needs of the recommendation domain, we introduce Supervised Contrastive Fine-Tuning (SCFT), which combines attribute-level data augmentation with a custom contrastive loss to tailor the LLM for recommendation. Moreover, we highlight the necessity of incorporating collaborative filtering signals into the LLM-generated embeddings and propose Recommendation Adaptation Training (RAT) for this purpose; RAT refines the embeddings so that they are optimally suited for SRS. The embeddings derived from LLMEmb can be readily integrated with any SRS model, showcasing their practical utility. Extensive experiments on three real-world datasets show that LLMEmb significantly outperforms current methods when applied across different SRS models.