Graph Neural Networks (GNNs), especially message-passing models, have become prominent in top-k recommendation tasks, outperforming matrix factorization models thanks to their ability to efficiently aggregate information from a broader context. Although GNNs are evaluated with ranking-based metrics, e.g., NDCG@k and Recall@k, they are still largely trained with proxy losses, e.g., the BPR loss. In this work we explore ranking loss functions that directly optimize the evaluation metrics, an area not extensively investigated in the GNN community for collaborative filtering. We take advantage of smooth approximations of the rank to enable end-to-end training of GNNs and propose a Personalized PageRank-based negative sampling strategy tailored to ranking loss functions. Moreover, we extend the evaluation of GNN models for top-k recommendation with an inductive user-centric protocol that more accurately reflects real-world applications. Our proposed method significantly outperforms the standard BPR loss and more advanced losses across four datasets and four recent GNN architectures while also training faster, demonstrating the potential of ranking loss functions for improving GNN training in collaborative filtering.
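To illustrate the idea of a smooth rank approximation that makes ranking metrics differentiable, a minimal sketch follows. It uses the common pairwise-sigmoid relaxation, rank_i ≈ 1 + Σ_{j≠i} σ((s_j − s_i)/τ) with temperature τ; the exact formulation used in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def smooth_rank(scores, tau=1.0):
    """Differentiable approximation of the rank of each score.

    Approximates rank_i ≈ 1 + sum_{j != i} sigmoid((s_j - s_i) / tau).
    As tau -> 0 this recovers the exact (hard) rank; larger tau gives a
    smoother, easier-to-optimize surrogate.
    """
    scores = np.asarray(scores, dtype=float)
    # Pairwise score differences: diff[i, j] = (s_j - s_i) / tau
    diff = (scores[None, :] - scores[:, None]) / tau
    sig = 1.0 / (1.0 + np.exp(-diff))
    # Exclude self-comparisons from the sum
    np.fill_diagonal(sig, 0.0)
    return 1.0 + sig.sum(axis=1)
```

With a small temperature the approximation collapses to the hard rank, so a smooth NDCG@k or Recall@k surrogate can be built by plugging these soft ranks into the metric's formula and training end-to-end with gradient descent.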