Large Language Models (LLMs) have been shown to be effective at enriching item descriptions, thereby improving the accuracy of recommendation systems. However, most existing approaches either rely on text-only prompting or employ basic multimodal strategies that do not fully exploit the complementary information available from the textual and visual modalities. This paper introduces a novel framework, Cross-Reflection Prompting, termed X-Reflect, designed to address these limitations by prompting Multimodal Large Language Models (MLLMs) to explicitly identify and reconcile supportive and conflicting information between text and images. By capturing nuanced insights from both modalities, this approach generates more comprehensive and contextually rich item representations. Extensive experiments on two widely used benchmarks demonstrate that our method outperforms existing prompting baselines in downstream recommendation accuracy. Furthermore, we identify a U-shaped relationship between text-image dissimilarity and recommendation performance, suggesting the benefit of applying multimodal prompting selectively. To support efficient real-time inference, we also introduce X-Reflect-keyword, a lightweight variant that summarizes image content using keywords and replaces the base model with a smaller backbone, achieving a nearly 50% reduction in input length while maintaining competitive performance. This work underscores the importance of integrating multimodal information and presents an effective solution for improving item understanding in multimodal recommendation systems.