Multimodal recommendation combines users' historical behaviors with the modal features of items to capture genuine user preferences, achieving superior performance over conventional ID-based recommender systems. However, existing methods still face two key problems in the representation learning of users and items, respectively: (1) the initialization of multimodal user representations is either agnostic to historical behaviors or contaminated by irrelevant modal noise, and (2) the widely used KNN-based item-item graph contains noisy low-similarity edges and lacks audience co-occurrence relationships. To address these issues, we propose MLLMRec, a novel preference-reasoning paradigm with graph refinement for multimodal recommendation. Specifically, on the one hand, item images are first converted into high-quality semantic descriptions by a multimodal large language model (MLLM), bridging the semantic gap between the visual and textual modalities. A behavioral description list is then constructed for each user and fed into the MLLM to reason about a purified user preference profile that captures the user's latent interaction intents. On the other hand, we develop threshold-controlled denoising and topology-aware enhancement strategies to refine the suboptimal item-item graph, thereby improving the accuracy of item representation learning. Extensive experiments on three publicly available datasets demonstrate that MLLMRec achieves state-of-the-art performance, with an average improvement of 21.48% over the best baselines. The source code is provided at https://github.com/Yuzhuo-Dang/MLLMRec.git.
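The preference-reasoning step above can be sketched as prompt construction: the user's behavioral description list (the MLLM-generated descriptions of interacted items) is serialized into a single prompt asking the model to infer a purified preference profile. This is a minimal illustration; the function name, prompt wording, and `max_items` truncation are assumptions, not the paper's actual prompt template.

```python
def build_preference_prompt(item_descriptions, max_items=20):
    """Assemble an MLLM prompt from a user's behavioral description list.

    `item_descriptions` are textual descriptions of the items the user
    interacted with, in chronological order; only the most recent
    `max_items` are kept as a simple way to fit the context window.
    """
    history = "\n".join(f"- {d}" for d in item_descriptions[-max_items:])
    return (
        "Below are descriptions of items a user has interacted with, "
        "from earliest to latest:\n"
        f"{history}\n"
        "Infer the user's preference profile and latent interaction intents, "
        "ignoring details irrelevant to their interests."
    )
```

The returned string would then be sent to the chosen MLLM, and its response used as the user's preference profile for downstream representation learning.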
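The graph-refinement side can likewise be sketched in a few lines: build a KNN item-item graph from item modal features, drop edges below a similarity threshold (threshold-controlled denoising), and add edges between items that share sufficiently many interacting users (one plausible reading of the audience co-occurrence enhancement). The function names, the cosine-similarity choice, and the parameters `k`, `tau`, and `min_co` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_knn_graph(features, k=10):
    # Cosine similarity between item modal feature vectors.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops from top-k
    adj = np.zeros_like(sim)
    topk = np.argsort(-sim, axis=1)[:, :k]
    for i, nbrs in enumerate(topk):
        adj[i, nbrs] = sim[i, nbrs]  # keep each item's k most similar items
    return adj

def denoise(adj, tau=0.5):
    # Threshold-controlled denoising: remove low-similarity (noisy) edges.
    return np.where(adj >= tau, adj, 0.0)

def add_cooccurrence(adj, interactions, min_co=2, weight=1.0):
    # Topology-aware enhancement (assumed form): connect item pairs whose
    # audiences overlap, i.e. co-interacted with by >= min_co users.
    co = interactions.T @ interactions  # (n_items, n_items) co-occurrence counts
    np.fill_diagonal(co, 0)
    return np.maximum(adj, weight * (co >= min_co))
```

Here `interactions` is a binary user-item matrix; the refined graph would replace the raw KNN graph when propagating item representations.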