Artificial intelligence (AI) hiring tools have revolutionized resume screening, and large language models (LLMs) have the potential to do the same. However, given the biases embedded within LLMs, it is unclear whether they can be used in this setting without disadvantaging groups based on their protected attributes. In this work, we investigate the possibility of using LLMs in a resume screening setting via a document retrieval framework that simulates job candidate selection. Using that framework, we then perform a resume audit study to determine whether a selection of Massive Text Embedding (MTE) models are biased in resume screening scenarios. We simulate this for nine occupations, using a collection of over 500 publicly available resumes and 500 job descriptions. We find that the MTEs are biased, significantly favoring White-associated names in 85.1\% of cases and female-associated names in only 11.1\% of cases, with a minority of cases showing no statistically significant differences. Further analyses show that Black males are disadvantaged in up to 100\% of cases, replicating real-world patterns of bias in employment settings, and validate three hypotheses of intersectionality. We also find that document length and the corpus frequency of names affect the selection of resumes. These findings have implications for widely used AI tools that are automating employment, and for fairness and tech policy.
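The retrieval framework described above can be sketched as embedding the job description as a query and ranking candidate resumes by embedding similarity. The snippet below is a minimal illustration of that ranking step, assuming cosine similarity as the relevance score; the toy vectors stand in for actual MTE model outputs and are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_resumes(job_embedding, resume_embeddings):
    """Return resume indices ordered from most to least similar to the job."""
    scores = [cosine_similarity(job_embedding, e) for e in resume_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Hypothetical 3-d embeddings; a real MTE model would produce
# high-dimensional vectors for each document.
job = np.array([1.0, 0.0, 1.0])
resumes = [
    np.array([1.0, 0.1, 0.9]),  # close match to the job description
    np.array([0.0, 1.0, 0.0]),  # poor match
    np.array([0.5, 0.5, 0.5]),  # middling match
]
print(rank_resumes(job, resumes))
```

In the audit setting, the top-ranked resumes play the role of the "selected" candidates, so any systematic shift in rank tied to name-based demographic signals surfaces as the selection-rate disparities reported above.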