The growing interest in fair AI development is evident. The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in accessing services, resources, and opportunities, emphasising the significance of fairness in AI. This is particularly relevant as an increasing number of AI tools are applied to decision-making processes, such as resource allocation and service scheme development, across sectors such as health, energy, and housing. Exploring joint inequalities across these sectors is therefore essential for a thorough understanding of overall inequality and unfairness. This research introduces an innovative approach to quantifying cross-sectoral intersecting discrepancies among user-defined groups using latent class analysis. These discrepancies can be used to approximate inequality and provide valuable insights into fairness issues. We validate our approach using both proprietary and public datasets, including the EVENS and Census 2021 (England & Wales) datasets, to examine cross-sectoral intersecting discrepancies among different ethnic groups. We also verify the reliability of the quantified discrepancy through a correlation analysis with a public government metric. Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications. Additionally, we demonstrate how the proposed approach can provide insights into the fairness of machine learning.
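The core tool named in the abstract is latent class analysis (LCA), which groups individuals by patterns in categorical indicators (e.g. outcomes across health, energy, and housing). As a purely illustrative sketch, not the paper's implementation, a latent class model with binary indicators can be fitted with a minimal expectation-maximisation loop; the function name, initialisation, and parameters below are my own assumptions.

```python
import numpy as np

def lca_em(X, n_classes=2, n_iter=200, seed=0):
    """Minimal EM for a latent class model with binary indicators.

    X: (n, J) array of 0/1 responses. Returns class weights pi (K,),
    item probabilities theta (K, J), and per-row class
    responsibilities resp (n, K).
    """
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # uniform class weights
    theta = rng.uniform(0.25, 0.75, size=(n_classes, J))  # random item probs
    for _ in range(n_iter):
        # E-step: log P(x_i, z_i = k) under a conditional-independence model
        log_p = (np.log(pi)[None, :]
                 + X @ np.log(theta).T
                 + (1.0 - X) @ np.log(1.0 - theta).T)
        log_p -= log_p.max(axis=1, keepdims=True)     # stabilise before exp
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)       # posterior class probs
        # M-step: re-estimate class weights and item probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1.0 - 1e-6)
    return pi, theta, resp
```

Given a fitted model, one hypothetical way to read off a cross-group discrepancy is to compare the class-membership distributions (`resp` averaged within each user-defined group), e.g. via a total-variation or chi-squared distance; the abstract does not specify the paper's exact discrepancy measure.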