Among existing approaches to visual explainability for Recommender Systems (RS), using user-uploaded item images as efficient, trustworthy explanations is a promising option. However, current models following this paradigm assume that, for any given user, all images uploaded by other users can be treated as negative training examples (i.e., bad explanatory images), an inadvertently naive labelling assumption that contradicts the rationale of the approach. This work proposes a new explainer training pipeline that leverages Positive-Unlabelled (PU) Learning techniques to train image-based explainers on refined subsets of reliable negative examples, selected for each user through a novel user-personalized, two-step, similarity-based PU Learning algorithm. Computational experiments show that this PU-based approach outperforms the state-of-the-art non-PU method on six popular real-world datasets, demonstrating that visual-based RS explainability can be improved by maximizing training data quality rather than increasing model complexity.
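To make the two-step idea concrete, the following is a minimal sketch of the first step — selecting reliable negatives for one user by similarity to that user's own (positive) images. It assumes images are already encoded as feature vectors in NumPy arrays; the function name, the cosine-similarity scoring, and the `keep_ratio` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def reliable_negatives(pos_feats, unlabelled_feats, keep_ratio=0.3):
    """Step 1 of a two-step PU pipeline (illustrative sketch).

    pos_feats:        (P, d) embeddings of the user's uploaded images (positives)
    unlabelled_feats: (U, d) embeddings of images uploaded by other users
    Returns indices of the unlabelled images least similar to any positive,
    taken as reliable negatives for this user.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    # Cosine similarity of every unlabelled image to every positive image.
    sims = normalize(unlabelled_feats) @ normalize(pos_feats).T
    # Score each unlabelled image by its closest positive.
    scores = sims.max(axis=1)
    # Keep the lowest-scoring fraction as reliable negatives.
    k = max(1, int(keep_ratio * len(unlabelled_feats)))
    return np.argsort(scores)[:k]


# Step 2 (not shown) would train the per-user image-based explainer on the
# positives versus these reliable negatives, rather than on all other
# users' images as in the non-PU baseline.
```

In this sketch, per-user selection is what makes the labelling personalized: an image that is a reliable negative for one user may remain unlabelled (and thus excluded from training) for another.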