Society increasingly relies on predictive models in fields such as criminal justice, credit risk management, and hiring. To prevent such automated systems from discriminating against people belonging to certain groups, fairness measures have become a crucial component of socially relevant applications of machine learning. However, existing fairness measures were designed to assess bias between predictions for protected groups without considering imbalance in the classes of the target variable. Current research on the potential effect of class imbalance on fairness focuses on practical applications rather than on dataset-independent properties of the measures themselves. In this paper, we study the general properties of fairness measures under changing class and protected-group proportions. For this purpose, we analyze the probability mass functions of six of the most popular group fairness measures. We also measure how the probability of achieving perfect fairness changes across varying class imbalance ratios. Moreover, we relate the dataset-independent properties of fairness measures described in this paper to classifier fairness in real-life tasks. Our results show that measures such as Equal Opportunity and Positive Predictive Parity are more sensitive to changes in class imbalance than Accuracy Equality. These findings can help guide researchers and practitioners in choosing the fairness measures most appropriate for their classification problems.
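For reference, the three measures named in the results admit standard probabilistic definitions. The formulations below are the commonly used ones from the group fairness literature, stated for the usual binary setting with classifier output $\hat{Y}$, true label $Y$, and protected attribute $A \in \{a, b\}$; they are given here for orientation rather than quoted from this paper:

\begin{align*}
\text{Equal Opportunity:} \quad & P(\hat{Y}=1 \mid Y=1, A=a) = P(\hat{Y}=1 \mid Y=1, A=b) \\
\text{Positive Predictive Parity:} \quad & P(Y=1 \mid \hat{Y}=1, A=a) = P(Y=1 \mid \hat{Y}=1, A=b) \\
\text{Accuracy Equality:} \quad & P(\hat{Y}=Y \mid A=a) = P(\hat{Y}=Y \mid A=b)
\end{align*}

Intuitively, Equal Opportunity compares true positive rates across groups, Positive Predictive Parity compares precisions, and Accuracy Equality compares overall accuracies. The first two condition on the positive class or on positive predictions, which suggests why shifting the class imbalance ratio can affect them more strongly than the unconditioned accuracy comparison.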