Large Language Models (LLMs) exhibit potential for explainable recommendation systems but overlook collaborative signals, while prevailing methods treat recommendation and explanation as separate tasks, resulting in a larger memory footprint. We present RGCF-XRec, a hybrid framework that introduces reasoning-guided collaborative filtering (CF) knowledge into a language model to deliver explainable sequential recommendations in a single step. Theoretical grounding and empirical findings reveal that RGCF-XRec offers three key merits over leading CF-aware LLM-based methods: (1) reasoning-guided augmentation of CF knowledge through contextual prompting to discover latent preferences and interpretable reasoning paths; (2) an efficient scoring mechanism based on four dimensions (coherence, completeness, relevance, and consistency) to mitigate noisy CF reasoning traces and retain high-quality explanations; (3) a unified representation learning network that encodes collaborative and semantic signals, enabling a structured prompt to condition the LLM for explainable sequential recommendation. RGCF-XRec demonstrates consistent improvements across three Amazon datasets (Sports, Toys, and Beauty) comprising 642,503 user-item interactions. It improves HR@10 by 7.38\% on Sports and 4.59\% on Toys, along with ROUGE-L by 8.02\% and 3.49\%, respectively. It narrows the gap between cold-start and warm-start performance, achieving overall gains of 14.5\% in cold-start and 11.9\% in warm-start scenarios, and enhances zero-shot HR@5 by 18.54\% on Beauty and 23.16\% on Toys, highlighting effective generalization and robustness. Moreover, RGCF-XRec achieves high training efficiency with a lightweight LLaMA 3.2-3B backbone, ensuring scalability for real-world applications.
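The four-dimension scoring mechanism described above can be illustrated with a minimal sketch. Note that the equal weights, the 0.7 threshold, and the assumption of per-dimension scores in [0, 1] are illustrative choices for this sketch, not parameters reported by the paper:

```python
from dataclasses import dataclass

@dataclass
class ReasoningTrace:
    """A CF reasoning trace with per-dimension quality scores in [0, 1]."""
    text: str
    coherence: float
    completeness: float
    relevance: float
    consistency: float

def quality_score(trace: ReasoningTrace,
                  weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted average over the four quality dimensions."""
    dims = (trace.coherence, trace.completeness,
            trace.relevance, trace.consistency)
    return sum(w * d for w, d in zip(weights, dims))

def filter_traces(traces, threshold=0.7):
    """Keep only traces whose aggregate score clears the threshold."""
    return [t for t in traces if quality_score(t) >= threshold]

# Hypothetical traces: one coherent, one noisy.
good = ReasoningTrace("user favors trail-running gear", 0.9, 0.8, 0.9, 0.85)
noisy = ReasoningTrace("unrelated chain of thought", 0.4, 0.3, 0.2, 0.5)
kept = filter_traces([good, noisy])  # only the high-quality trace survives
```

In this sketch the noisy trace (aggregate score 0.35) is discarded while the coherent one (0.8625) is retained, mirroring the paper's goal of keeping only high-quality explanations for downstream prompting.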