Integrating diverse data modalities is crucial for enhancing the performance of personalized recommendation systems. Traditional models, which often rely on a single data source, lack the depth needed to accurately capture the multifaceted nature of item features and user behaviors. This paper introduces a novel framework for multi-behavior recommendation that fuses three modalities, namely visual, textual, and graph data, through alignment with large language models (LLMs). Visual information captures contextual and aesthetic item characteristics; textual data provides detailed insights into user interests and item features; and graph data elucidates relationships within item-behavior heterogeneous graphs. Our proposed model, Triple Modality Fusion (TMF), leverages the power of LLMs to align and integrate these three modalities, achieving a comprehensive representation of user behaviors. The LLM models user interactions, including behaviors and item features, in natural language. The LLM is first warmed up using natural-language-only prompts. We then devise a modality fusion module based on cross-attention and self-attention mechanisms that projects the modalities produced by separate encoders into the same embedding space and incorporates them into the LLM. Extensive experiments demonstrate the effectiveness of our approach in improving recommendation accuracy. Further ablation studies validate our model design and the benefits of TMF.
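To make the fusion idea concrete, the following is a minimal NumPy sketch of a cross-attention plus self-attention fusion step over three modality embeddings. All dimensions, sequence lengths, and encoder outputs here are illustrative placeholders, not the actual TMF architecture or its hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (illustrative choice)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# Stand-ins for modality encoder outputs, assumed already projected to dimension d
visual  = rng.normal(size=(4, d))   # e.g. 4 image-region embeddings
textual = rng.normal(size=(6, d))   # e.g. 6 token embeddings
graph   = rng.normal(size=(5, d))   # e.g. 5 node embeddings from the item-behavior graph

# Cross-attention: textual tokens query the visual and graph modalities
context = np.concatenate([visual, graph], axis=0)   # (9, d)
fused = attention(textual, context, context)        # (6, d)

# Self-attention over the fused sequence refines the joint representation
fused = attention(fused, fused, fused)              # (6, d)

# These fused embeddings would then be fed into the LLM's input space
print(fused.shape)  # (6, 8)
```

In a trained model the projections and attention weights would of course be learned, and the fused embeddings would be aligned with the LLM's token embedding space rather than drawn at random.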