Machine learning algorithms are ubiquitous in high-stakes decision-making contexts such as justice, healthcare, and finance, which has created a strong demand for fairness in these procedures. However, the theoretical properties of such models in relation to fairness remain poorly understood, and intuition about the relationship between group and individual fairness is still lacking. In this paper, we provide a theoretical framework based on Sheaf Diffusion that leverages tools from dynamical systems and homology to model fairness. Concretely, the proposed method projects the input data into a bias-free space that encodes fairness constraints, yielding fair solutions. Furthermore, we present a collection of network topologies handling different fairness metrics, leading to a unified method capable of dealing with both individual and group bias. The resulting models offer a layer of interpretability in the form of closed-form expressions for their SHAP values, consolidating their place in the responsible Artificial Intelligence landscape. Finally, these intuitions are tested in a simulation study and on standard fairness benchmarks, where the proposed methods achieve satisfactory results. More concretely, the paper showcases the performance of the proposed models in terms of accuracy and fairness, studying the available trade-offs on the Pareto frontier, examining the effects of varying the hyper-parameters, and delving into the interpretation of their outputs.
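As a rough illustration of the mechanism the abstract describes, the following minimal sketch (all names and the toy setup are hypothetical, not the paper's actual construction) shows how diffusion with a sheaf-style Laplacian `L` drives node features toward `ker(L)`, the harmonic space that here plays the role of a constraint-encoding, bias-free space: in the long-time limit, features attached to different nodes agree.

```python
import numpy as np

# Toy sheaf: two nodes with 1-d stalks joined by one edge whose
# restriction maps are identities. The resulting sheaf Laplacian
# reduces to the ordinary graph Laplacian L = B^T B for the
# incidence-like matrix B = [1, -1].
B = np.array([[1.0, -1.0]])
L = B.T @ B

x = np.array([3.0, -1.0])    # initial node features (e.g. biased scores)
step = 0.1                   # Euler step size for dx/dt = -L x
for _ in range(500):
    x = x - step * (L @ x)   # discretised heat/sheaf diffusion

# The diffusion converges to the projection of the initial features
# onto ker(L): both nodes end up sharing the initial mean, 1.0.
print(x)  # ~ [1.0, 1.0]
```

The design point this toy example makes is that the "fair" limit is not imposed after the fact: it is the fixed-point set of the dynamics itself, determined entirely by how the Laplacian (and hence the constraints) is built.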