The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind. In this position paper, we argue that the disparities faced by marginalized communities (in performance, representation, privacy, robustness, interpretability, and safety) are not isolated concerns but interconnected elements of a cascading disparity phenomenon. We contrast foundation models with traditional models and highlight the potential for exacerbated disparity against marginalized communities. Moreover, we emphasize the unique threat of cascading impacts in foundation models, where interconnected disparities can trigger long-lasting negative consequences, particularly for people on the margins. We define marginalized communities within the machine learning context and explore the multifaceted nature of disparities. We analyze the sources of these disparities, tracing them through data creation, training, and deployment procedures to highlight the complex technical and socio-technical landscape. To address this pressing crisis, we conclude with a set of calls to action to mitigate disparity at its source.