We present OxonFair, a new open-source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positives, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 decision-based group metrics of one popular review article and all 10 of another. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, AutoGluon, and PyTorch, and is available at https://github.com/oxfordinternetinstitute/oxonfair
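To make point (iii) concrete, any group-fairness metric expressible in terms of per-group True Positives, False Positives, False Negatives, and True Negatives can be computed from confusion-matrix counts. The sketch below is illustrative only and does not use the OxonFair API; the function names (`group_confusion`, `equal_opportunity_gap`) are hypothetical helpers, shown here for the equal-opportunity (TPR-difference) metric.

```python
import numpy as np

def group_confusion(y_true, y_pred, groups):
    """Per-group (TP, FP, FN, TN) counts for binary labels in {0, 1}.

    Hypothetical helper for illustration; not part of OxonFair.
    """
    stats = {}
    for g in np.unique(groups):
        m = groups == g
        t, p = y_true[m], y_pred[m]
        tp = int(np.sum((t == 1) & (p == 1)))
        fp = int(np.sum((t == 0) & (p == 1)))
        fn = int(np.sum((t == 1) & (p == 0)))
        tn = int(np.sum((t == 0) & (p == 0)))
        stats[g] = (tp, fp, fn, tn)
    return stats

def equal_opportunity_gap(stats):
    """Largest difference in True Positive Rate (TP / (TP + FN)) across groups."""
    tprs = [tp / (tp + fn) for tp, fp, fn, tn in stats.values()]
    return max(tprs) - min(tprs)

# Toy example: group 'a' has TPR 0.5, group 'b' has TPR 1.0, so the gap is 0.5.
y_true = np.array([1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b"])
gap = equal_opportunity_gap(group_confusion(y_true, y_pred, groups))
```

Any other decision-based group metric (demographic parity, predictive parity, etc.) follows the same pattern: compute the four counts per group, then compare a derived rate across groups.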