Group recommender systems help users make collective choices but often lack transparency, leaving group members uncertain about why items are suggested. Existing explanation methods focus on individuals, offering limited support for groups where multiple preferences interact. In this paper, we propose a framework for group counterfactual explanations, which reveal how removing specific past interactions would change a group recommendation. We formalize this concept, introduce utility and fairness measures tailored to groups, and design heuristic algorithms, such as Pareto-based filtering and grow-and-prune strategies, for efficient explanation discovery. Experiments on MovieLens and Amazon datasets show clear trade-offs: low-cost methods produce larger, less fair explanations, while other approaches yield concise and balanced results at higher cost. Furthermore, the Pareto-filtering heuristic demonstrates significant efficiency improvements in sparse settings.
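The core idea above — finding minimal sets of past interactions whose removal changes the group's recommendation — can be illustrated with a toy sketch. This is not the paper's algorithm: the scoring model (interaction counts aggregated by average), the tie-breaking rule, and the `retrain` helper are all simplified assumptions for illustration; the search below is brute force rather than the Pareto-filtering or grow-and-prune heuristics the paper proposes.

```python
from itertools import combinations

def group_top_item(scores):
    """Group recommendation: item with the highest average score
    across members (ties broken lexicographically for determinism)."""
    items = set()
    for user_scores in scores.values():
        items |= user_scores.keys()
    avg = lambda item: sum(s.get(item, 0.0) for s in scores.values()) / len(scores)
    return max(items, key=lambda item: (avg(item), item))

def counterfactual_explanations(interactions, retrain, max_size=2):
    """Brute-force search for minimal interaction sets whose removal
    flips the group's top item. `retrain` (a stand-in for re-running
    the recommender) maps a reduced interaction list to score tables."""
    original = group_top_item(retrain(interactions))
    found = []
    for size in range(1, max_size + 1):
        for subset in combinations(interactions, size):
            remaining = [i for i in interactions if i not in subset]
            if group_top_item(retrain(remaining)) != original:
                found.append(subset)
        if found:  # minimality: stop at the smallest size that works
            break
    return found

def retrain(remaining):
    """Toy 'recommender': each user's score for an item is simply
    how often that user interacted with it."""
    scores = {u: {} for u in ("u1", "u2", "u3")}
    for user, item in remaining:
        scores[user][item] = scores[user].get(item, 0) + 1.0
    return scores

# Hypothetical group history: item B is the group's top pick (3 interactions vs 2).
interactions = [("u1", "A"), ("u2", "A"), ("u1", "B"), ("u2", "B"), ("u3", "B")]
explanations = counterfactual_explanations(interactions, retrain, max_size=2)
# Each explanation is a minimal set of interactions whose removal flips the top item.
```

In this toy instance, no single removal flips the recommendation, so the search returns size-2 subsets — pairs of the B interactions. Fairness questions of the kind the abstract raises already surface here: an explanation removing interactions from two different users spreads the burden, while one built mostly from a single user's history does not.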