The integration of path reasoning with language modeling in recommender systems has shown promise for enhancing explainability, but the authenticity of the resulting explanations remains a challenge. Traditional models modify their architecture to produce entities and relations alternately (for example, by employing a separate prediction head for each), which does not guarantee that generated paths reflect actual Knowledge Graph (KG) connections. This misalignment can produce corrupted paths and, in turn, erode user trust. To address this, we introduce PEARLM (Path-based Explainable-Accurate Recommender based on Language Modelling), which features a Knowledge Graph Constraint Decoding (KGCD) mechanism. KGCD eliminates corrupted paths entirely by enforcing adherence to valid KG connections at decoding time, independent of the underlying model architecture. By learning token embeddings directly from KG paths, PEARLM not only guarantees the generation of plausible and verifiable explanations but also substantially improves recommendation accuracy. We validate the effectiveness of our approach through a rigorous empirical assessment, employing a newly proposed metric that quantifies the integrity of explanation paths. Our results demonstrate a significant improvement over existing methods, effectively eliminating the generation of inaccurate paths and advancing the state of the art in explainable recommender systems.
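The core idea behind decoding-level constraint enforcement can be illustrated with a minimal sketch. This is not the authors' implementation; the KG structure, function names, and toy data below are illustrative assumptions. At each step, the set of KG-valid continuations is computed from the path so far, and all other tokens are masked out before selection, so a sampled path can never leave the graph.

```python
# Minimal sketch of KG-constrained decoding (illustrative, not the paper's code).
# Toy KG: entity -> {relation: {reachable entities}}
KG = {
    "user1": {"watched": {"movieA", "movieB"}},
    "movieA": {"directed_by": {"dir1"}},
    "dir1": {"directed": {"movieC"}},
}

def valid_next_tokens(path):
    """Paths alternate entity/relation tokens: return the tokens that
    extend `path` along a real KG edge."""
    if len(path) % 2 == 1:          # last token is an entity -> next is a relation
        return set(KG.get(path[-1], {}))
    head, rel = path[-2], path[-1]  # last token is a relation -> next is an entity
    return KG.get(head, {}).get(rel, set())

def constrained_argmax(logits, path):
    """Pick the highest-scoring token among KG-valid candidates only."""
    allowed = valid_next_tokens(path)
    masked = {tok: score for tok, score in logits.items() if tok in allowed}
    return max(masked, key=masked.get) if masked else None

# Even if the model scores an invalid relation highest ("directed" does not
# hold for user1), the mask restricts the choice to valid edges.
logits = {"watched": 0.1, "directed": 2.5, "directed_by": 0.3}
print(constrained_argmax(logits, ["user1"]))  # -> "watched"
```

Because the mask is applied purely at decoding time, the same mechanism works with any autoregressive architecture, which is what makes the approach architecture-agnostic.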