We revisit the problem of fair representation learning by proposing Fair Partial Least Squares (PLS) components. PLS is widely used in statistics to efficiently reduce the dimension of the data by providing representations tailored to the prediction task. We propose a novel method to incorporate fairness constraints into the construction of PLS components. This new algorithm provides a feasible way to construct such features in both the linear and the non-linear case using kernel embeddings. The efficiency of our method is evaluated on different datasets, and we demonstrate its superiority over the standard fair PCA method.
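To make the notion of a prediction-tailored representation concrete, here is a minimal NumPy sketch of extracting the first (linear, unconstrained) PLS component: the weight vector maximizing the covariance between the projected data and the response is proportional to X^T y. This is a generic illustration of standard PLS, not the fair variant proposed in the paper; all variable names and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy data: 100 samples, 5 features, linear response with noise (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)

Xc = X - X.mean(axis=0)   # center predictors
yc = y - y.mean()         # center response
w = Xc.T @ yc             # direction of maximal covariance with y
w /= np.linalg.norm(w)    # unit-norm weight vector
t = Xc @ w                # first PLS score (the learned component)
```

Subsequent components are obtained by deflating `Xc` (removing the part explained by `t`) and repeating; the paper's contribution is to add fairness constraints to this construction.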