Supervised learning systems are trained on historical data and, if that data was tainted by discrimination, they may unintentionally learn to discriminate against protected groups. We propose that fair learning methods, despite training on potentially discriminatory datasets, should perform well on fair test datasets. Such dataset shifts crystallize application scenarios for specific fair learning methods. For instance, the removal of direct discrimination can be cast as a particular dataset shift problem. For this scenario, we propose a learning method that provably minimizes model error on fair datasets while training blindly on datasets poisoned with direct additive discrimination. The method is compatible with existing legal systems and addresses the widely discussed problem of the intersectionality of protected groups by striking a balance among them. Technically, the method applies probabilistic interventions, has causal and counterfactual formulations, and is computationally lightweight: it can be used with any supervised learning model to prevent direct discrimination, and indirect discrimination via proxies, while maximizing model accuracy for business necessity.
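A minimal sketch of the dataset-shift scenario described above, not the authors' actual method: training labels are fair labels shifted by a group-dependent additive penalty, while evaluation targets the fair labels. All variable names, the penalty value `delta`, and the coefficient-based correction are hypothetical assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))               # legitimate features
a = rng.integers(0, 2, size=n)            # protected attribute (0/1), hypothetical
y_fair = x @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=0.1, size=n)

delta = 1.5                               # hypothetical additive penalty
y_biased = y_fair - delta * a             # direct additive discrimination

# Naive model trained blindly on the poisoned labels.
naive = LinearRegression().fit(np.column_stack([x, a]), y_biased)

# One simple correction under the additive assumption (an illustration,
# not the paper's method): read off the fitted coefficient on the
# protected attribute and remove that offset at prediction time, so
# test-time predictions target the fair labels.
coef_a = naive.coef_[-1]
preds_fair = naive.predict(np.column_stack([x, a])) - coef_a * a

print("estimated penalty:", -coef_a)      # close to delta
print("MSE on fair labels:", np.mean((preds_fair - y_fair) ** 2))
```

The point of the sketch is only that a model trained on additively discriminatory labels can still be evaluated, and corrected, against a fair test distribution; the paper's proposed method operates through probabilistic interventions rather than this post-hoc coefficient adjustment.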