Efforts in the recommendation community are shifting from a sole emphasis on utility to beyond-utility factors, such as fairness and robustness. The robustness of recommendation models is typically linked to their ability to maintain their original utility when subjected to attacks. Limited research has explored the robustness of a recommendation model in terms of fairness, e.g., the parity in performance across groups, under attack scenarios. In this paper, we aim to assess the robustness of graph-based recommender systems with respect to fairness when exposed to attacks based on edge-level perturbations. To this end, we consider four different fairness operationalizations, covering both consumer and provider perspectives. Experiments on three datasets shed light on the impact of perturbations on the targeted fairness notion, uncovering key shortcomings in existing evaluation protocols for robustness. For example, we observe that perturbations affect consumer fairness to a greater extent than provider fairness, with alarming unfairness for the former. Source code: https://github.com/jackmedda/CPFairRobust