As the adoption of machine learning (ML) systems continues to grow across industries, concerns about fairness and bias in these systems have taken center stage. Fairness toolkits, designed to mitigate bias in ML models, are critical instruments for addressing these ethical concerns. However, their adoption in software development remains underexplored, especially regarding the cognitive and behavioral factors that drive their use. Because a deeper understanding of these factors could be pivotal in refining tool designs and promoting broader adoption, this study investigates the factors influencing the adoption of fairness toolkits from an individual perspective. Guided by the extended Unified Theory of Acceptance and Use of Technology (UTAUT2), we examined the factors shaping both the intention to adopt and the actual use of fairness toolkits. Specifically, we employed Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze data from a survey of practitioners in the software industry. Our findings reveal that performance expectancy and habit are the primary drivers of fairness toolkit adoption. These insights suggest that by emphasizing the effectiveness of these tools in mitigating bias and by fostering habitual use, organizations can encourage wider adoption. Practical recommendations include improving toolkit usability, integrating bias mitigation into routine development workflows, and providing ongoing support so that practitioners see clear benefits from regular use.