This paper examines fairness in the estimation of graphical models (GMs), particularly Gaussian, Covariance, and Ising models. These models play a vital role in understanding complex relationships in high-dimensional data. However, standard GMs can produce biased outcomes, especially when the underlying data involve sensitive characteristics or protected groups. To address this, we introduce a comprehensive framework designed to reduce bias in GM estimation with respect to protected attributes. Our approach integrates a pairwise graph disparity error and a tailored loss function into a nonsmooth multi-objective optimization problem, achieving fairness across sensitive groups while preserving the effectiveness of the GMs. Experimental evaluations on synthetic and real-world datasets demonstrate that our framework mitigates bias without undermining the GMs' performance.
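To make the described objective concrete, the following is a minimal sketch of how a Gaussian graphical model loss might be combined with a pairwise group disparity penalty. All function names, the exact form of the disparity term, and the weighting are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def gaussian_gm_loss(Theta, S, lam):
    # Gaussian graphical model negative log-likelihood with an L1 penalty:
    # tr(S @ Theta) - log det(Theta) + lam * ||Theta||_1
    _, logdet = np.linalg.slogdet(Theta)
    return np.trace(S @ Theta) - logdet + lam * np.abs(Theta).sum()

def pairwise_disparity(Theta, group_covs):
    # Hypothetical "pairwise graph disparity error": squared differences
    # of per-group likelihood losses over every pair of sensitive groups.
    _, logdet = np.linalg.slogdet(Theta)
    losses = [np.trace(S_g @ Theta) - logdet for S_g in group_covs]
    return sum((li - lj) ** 2
               for i, li in enumerate(losses)
               for lj in losses[i + 1:])

# Toy example with two synthetic sensitive groups.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 4))
X2 = rng.normal(size=(80, 4))
S = np.cov(np.vstack([X1, X2]).T)
Theta = np.linalg.inv(S + 0.1 * np.eye(4))  # a feasible precision matrix

# One scalarization of the two objectives (weight 1.0 is arbitrary);
# the paper instead treats them jointly via multi-objective optimization.
obj = gaussian_gm_loss(Theta, S, lam=0.1) \
      + 1.0 * pairwise_disparity(Theta, [np.cov(X1.T), np.cov(X2.T)])
```

In the paper's actual framework, the two terms are not simply summed with a fixed weight: the nonsmooth multi-objective problem trades off accuracy and group fairness without committing to a single scalarization.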