Multimodal sequential recommendation (MSR) leverages diverse item modalities to improve recommendation accuracy, but achieving effective and adaptive fusion remains challenging. Existing MSR models often overlook synergistic information that emerges only through modality combinations. Moreover, they typically assume that different modality interactions carry a fixed importance across users. To address these limitations, we propose \textbf{P}ersonalized \textbf{R}ecommendation via \textbf{I}nformation \textbf{S}ynergy \textbf{M}odule (PRISM), a plug-and-play framework for sequential recommendation (SR). PRISM explicitly decomposes multimodal information into unique, redundant, and synergistic components through an Interaction Expert Layer and dynamically weights them via an Adaptive Fusion Layer guided by user preferences. This information-theoretic design enables fine-grained disentanglement and personalized fusion of multimodal signals. Extensive experiments on four datasets and three SR backbones demonstrate its effectiveness and versatility. The code is available at https://github.com/YutongLi2024/PRISM.
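The decompose-then-fuse idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the projection matrices standing in for the Interaction Expert Layer, the gating matrix standing in for the Adaptive Fusion Layer, and the function name `prism_fuse` are all hypothetical, and real experts would be learned networks rather than random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical "interaction experts": each projects the concatenated
# text/image item embeddings to one information component
# (unique, redundant, synergistic).
W_unique = rng.normal(size=(2 * d, d))
W_redundant = rng.normal(size=(2 * d, d))
W_synergy = rng.normal(size=(2 * d, d))

# Hypothetical gate: maps a user-preference vector to one weight per
# component, mimicking the user-guided Adaptive Fusion Layer.
W_gate = rng.normal(size=(d, 3))

def prism_fuse(text_emb, image_emb, user_pref):
    """Decompose two modality embeddings into three components and
    fuse them with personalized, user-conditioned weights."""
    x = np.concatenate([text_emb, image_emb])
    components = np.stack([x @ W_unique, x @ W_redundant, x @ W_synergy])
    gate = softmax(user_pref @ W_gate)  # (3,) weights, sum to 1
    return gate @ components            # weighted sum -> (d,)

fused = prism_fuse(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d))
```

Because the gate depends on the user vector, two users with different preferences receive different mixtures of the same three components, which is the personalization the abstract describes.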