Fairness holds a pivotal role in machine learning, particularly when addressing groups categorised by protected attributes, e.g., gender or race. Prevailing algorithms in fair learning predominantly hinge on the availability, or estimates, of these protected attributes, at least during training. We design a single group-blind projection map that aligns the feature distributions of both groups in the source data, achieving (demographic) group parity without requiring the protected attribute of individual samples, either when computing the map or when applying it. Instead, our approach utilises the feature distributions of the privileged and unprivileged groups in a broader population, together with the essential assumption that the source data are an unbiased representation of that population. We present numerical results on synthetic data and real data.
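To make the setting concrete, the following is a minimal illustrative sketch, not the paper's actual construction: a single linear projection, built only from population-level group statistics (here, the two group means), is applied identically to every sample without ever reading a protected attribute. Removing the direction separating the group means equalises the groups' first moments group-blindly; all data, dimensions, and the choice of a mean-gap projection are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Population-level group means, assumed known from the broader population
# (the source data's protected attributes are never observed).
mu_priv = np.zeros(d)
mu_unpriv = np.zeros(d)
mu_unpriv[0] = 2.0  # the groups differ along the first feature axis

# Source data: an unbiased mixture of both groups, attributes unobserved.
x_priv = rng.normal(mu_priv, 1.0, size=(500, d))
x_unpriv = rng.normal(mu_unpriv, 1.0, size=(500, d))
X = np.vstack([x_priv, x_unpriv])

# Group-blind projection: I - v v^T removes the mean-gap direction v.
v = mu_unpriv - mu_priv
v = v / np.linalg.norm(v)
P = np.eye(d) - np.outer(v, v)

# The SAME map is applied to every sample -- no attribute is needed.
Z = X @ P.T

# After projection, the two groups' empirical means (nearly) coincide.
gap_before = np.linalg.norm(X[:500].mean(0) - X[500:].mean(0))
gap_after = np.linalg.norm(Z[:500].mean(0) - Z[500:].mean(0))
```

This only equalises first moments; the paper's map targets full feature distributions, but the group-blind principle (one map, computed and applied without per-sample attributes) is the same.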