We propose a simple, intuitive principle for measuring algorithmic classification bias: the significance of the differences in a classifier's error rates across demographics is inversely related to the sample size required to detect them statistically. That is, if a large sample is required to statistically establish biased behavior, the algorithm is less biased, and vice versa. In a simple setting, we assume two distinct demographics and non-parametric estimates e1 and e2 of the classifier's error rates on them. We use a well-known approximate formula for the sample size of the chi-squared test and verify some basic desirable properties of the proposed measure. Next, we compare the proposed measure with two other commonly used statistics, the difference e2 - e1 and the ratio e2/e1 of the error rates. We establish that the proposed measure is essentially different, in that it can rank algorithms differently with respect to bias, and we discuss some of its advantages over the other two measures. Finally, we briefly discuss how some of the desirable properties of the proposed measure stem from fundamental characteristics of the method rather than from the particular approximate sample-size formula we used, and are therefore expected to hold in more complex settings with more than two demographics.
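To make the principle concrete, here is a minimal Python sketch, not taken from the paper: it uses the standard normal-approximation sample-size formula for comparing two proportions, one common form of the "well-known approximate formula" for the chi-squared test mentioned above. The function name approx_sample_size and the significance/power defaults are illustrative choices, not the paper's.

```python
from statistics import NormalDist

def approx_sample_size(e1: float, e2: float,
                       alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate per-group sample size needed to detect the difference
    between two error rates e1 != e2 with a two-proportion test (the
    normal approximation underlying the 2x2 chi-squared test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_b = NormalDist().inv_cdf(power)          # quantile for the desired power
    p_bar = (e1 + e2) / 2                      # pooled error rate under H0
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (e1 * (1 - e1) + e2 * (1 - e2)) ** 0.5) ** 2
    return numerator / (e2 - e1) ** 2

# A larger required sample size means the disparity is harder to detect,
# i.e. the classifier is *less* biased under the proposed principle.
# The resulting ranking can disagree with the ratio e2/e1:
for e1, e2 in [(0.01, 0.02), (0.40, 0.45)]:
    n = approx_sample_size(e1, e2)
    print(f"e1={e1:.2f} e2={e2:.2f}  diff={e2 - e1:.2f}  "
          f"ratio={e2 / e1:.2f}  n per group ~ {n:.0f}")
```

Under these illustrative defaults, the first hypothetical pair has the larger ratio (2.00 vs. 1.13) yet requires the larger sample (roughly 2300 vs. 1500 per group), so the ratio and the sample-size measure rank the two classifiers oppositely, consistent with the abstract's claim that the proposed measure is essentially different from the difference and ratio statistics.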