Group recommender systems help users make collective choices but often lack transparency, leaving group members uncertain about why items are suggested. Existing explanation methods focus on individuals, offering limited support for groups where multiple preferences interact. In this paper, we propose a framework for group counterfactual explanations, which reveal how removing specific past interactions would change a group recommendation. We formalize this concept, introduce utility and fairness measures tailored to groups, and design heuristic algorithms, such as Pareto-based filtering and grow-and-prune strategies, for efficient explanation discovery. Experiments on MovieLens and Amazon datasets show clear trade-offs: low-cost methods produce larger, less fair explanations, while other approaches yield concise and balanced results at higher cost. Furthermore, the Pareto-filtering heuristic demonstrates significant efficiency improvements in sparse settings.
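The core idea of a group counterfactual explanation can be illustrated with a minimal sketch. The snippet below assumes a toy group recommender that aggregates members' ratings by simple averaging and searches, smallest-first, for a set of past interactions whose removal changes the top recommendation. All names, data, and the aggregation rule are illustrative assumptions, not the paper's actual algorithms.

```python
from itertools import combinations

# Hypothetical interaction log: (member, item, rating) triples.
interactions = [
    ("alice", "A", 5), ("alice", "B", 2),
    ("bob",   "A", 4), ("bob",   "C", 5),
    ("carol", "B", 4), ("carol", "C", 3),
]

def recommend(history):
    """Toy group recommender: item with the highest average rating."""
    totals, counts = {}, {}
    for _, item, rating in history:
        totals[item] = totals.get(item, 0) + rating
        counts[item] = counts.get(item, 0) + 1
    return max(totals, key=lambda i: totals[i] / counts[i])

def counterfactual(history, max_size=2):
    """Smallest interaction set whose removal flips the recommendation.

    Brute-force baseline; the paper's heuristics (Pareto-based filtering,
    grow-and-prune) aim to avoid exactly this exhaustive subset search.
    """
    original = recommend(history)
    for size in range(1, max_size + 1):
        for subset in combinations(history, size):
            rest = [x for x in history if x not in subset]
            if rest and recommend(rest) != original:
                return subset
    return None

print(recommend(interactions))        # item "A" (group average 4.5)
print(counterfactual(interactions))   # a single interaction suffices here
```

In this toy data, removing carol's low rating of item C raises C's average above A's, flipping the group recommendation; the counterfactual thus pinpoints which interaction the outcome hinged on. Real systems would additionally score candidate explanations with the utility and fairness measures described above.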