Sequential recommender systems rank relevant items by modeling a user's interaction history and computing the inner product between the resulting user representation and stored item embeddings. To avoid the significant memory overhead of storing large item sets, the generative recommendation paradigm instead models each item as a series of discrete semantic codes. Here, the next item is predicted by an autoregressive model that generates the code sequence corresponding to the predicted item. However, despite promising ranking capabilities on small datasets, these methods have yet to surpass traditional sequential recommenders on large item sets, limiting their adoption in the very scenarios they were designed to address. To resolve this, we propose MSCGRec, a Multimodal Semantic and Collaborative Generative Recommender. MSCGRec incorporates multiple semantic modalities and introduces a novel self-supervised quantization learning approach for images based on the DINO framework. Additionally, MSCGRec fuses collaborative and semantic signals by extracting collaborative features from sequential recommenders and treating them as a separate modality. Finally, we propose constrained sequence learning that restricts the large output space during training to the set of permissible tokens. We empirically demonstrate on three large real-world datasets that MSCGRec outperforms both sequential and generative recommendation baselines and provide an extensive ablation study to validate the impact of each component.
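The constrained sequence learning idea above can be illustrated as restricting the training loss at each position of the code sequence to the set of permissible tokens, so the softmax normalizes only over codes that can actually occur there. Below is a minimal, dependency-free sketch (not the paper's implementation; the logits and mask values are invented for the example):

```python
import math

def constrained_cross_entropy(logits, target, permissible):
    """Cross-entropy over only the permissible subset of the code vocabulary.

    logits:      list of raw scores over the full code vocabulary
    target:      index of the ground-truth code token (must be permissible)
    permissible: boolean list, True for tokens allowed at this position
    """
    # Drop impermissible logits entirely (equivalent to -inf masking),
    # so probability mass is shared only among valid codes.
    kept = [(i, z) for i, z in enumerate(logits) if permissible[i]]
    m = max(z for _, z in kept)  # subtract the max for numerical stability
    log_denom = math.log(sum(math.exp(z - m) for _, z in kept))
    z_target = dict(kept)[target]
    return -(z_target - m - log_denom)

# Toy example: 6 candidate codes, only {1, 3, 4} are permissible here.
logits = [2.0, 0.5, 1.0, 0.2, -1.0, 3.0]
permissible = [False, True, False, True, True, False]
loss = constrained_cross_entropy(logits, 3, permissible)
```

Because high-scoring but impermissible codes (indices 0 and 5 above) are excluded from the normalization, the constrained loss is never larger than the unconstrained one, which is the intended effect of shrinking the output space during training.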