Multimodal recommender systems (MMRSs) enhance collaborative filtering by leveraging item-side modalities, but their reliance on a fixed set of modalities and on task-specific objectives limits both modality extensibility and task generalization. We propose E-MMKGR, a framework that constructs an e-commerce-specific multimodal knowledge graph, E-MMKG, and learns unified item representations through GNN-based propagation and KG-oriented optimization. These representations provide a shared semantic foundation applicable to diverse downstream tasks. Experiments on real-world Amazon datasets show improvements of up to 10.18% in Recall@10 on recommendation and up to 21.72% over vector-based retrieval baselines on product search, demonstrating the effectiveness and extensibility of our approach.