While much research has shown the presence of AI's "under-the-hood" biases (e.g., algorithmic biases or biases in training data), what about "over-the-hood" inclusivity biases: barriers in user-facing AI products that disproportionately exclude users with certain problem-solving approaches? Recent research has begun to report the existence of such biases -- but what do they look like, how prevalent are they, and how can developers find and fix them? To find out, we conducted a field study with 3 AI product teams to investigate what kinds of AI inclusivity bugs exist uniquely in user-facing AI products, and whether/how AI product teams might harness an existing (non-AI-oriented) inclusive design method to find and fix them. The teams' work resulted in identifying 6 types of AI inclusivity bugs arising in 83 instances, fixes covering 47 of these bug instances, and a new variation of the GenderMag inclusive design method, GenderMag-for-AI, that is especially effective at detecting certain kinds of AI inclusivity bugs.