The recruitment process is crucial to an organization's ability to position itself for success, from finding qualified, well-fitting job candidates to shaping its productivity and culture. Accordingly, over the past century, human resources experts and industrial-organizational psychologists have established hiring practices such as attracting candidates with job ads, gauging candidates' skills with assessments, and using interview questions to evaluate organizational fit. However, the advent of big data and machine learning has rapidly transformed the traditional recruitment process, as many organizations have moved to using artificial intelligence (AI). Given the prevalence of AI-based recruitment, there is growing concern that human biases may carry over into decisions made by these systems, whose systematic application can amplify those biases at scale. Empirical studies have identified prevalent biases in candidate-ranking software and chatbot interactions, catalyzing a growing body of research dedicated to AI fairness over the last decade. This paper provides a comprehensive overview of this emerging field by discussing the types of biases encountered in AI-driven recruitment, exploring various fairness metrics and mitigation methods, and examining tools for auditing these systems. We highlight current challenges and outline future directions for developing fair AI recruitment applications, ensuring equitable candidate treatment and enhancing organizational outcomes.