What counts as legitimate AI ethics labor, and consequently, on what epistemic terms are AI ethics claims rendered legitimate? Based on 75 interviews with technologists, including researchers, developers, open source contributors, and activists, this paper explores the various epistemic bases from which AI ethics is discussed and practiced. In the context of outside attacks on AI ethics as an impediment to "progress," I show how some AI ethics practices have reached toward authority grounded in automation and quantification, and achieved some legitimacy as a result, while those based on richly embodied and situated lived experience have not. This paper draws together the work of feminist anthropology and Science and Technology Studies scholars Diana Forsythe and Lucy Suchman with the work of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. By entrenching the epistemic power of quantification, dominant AI ethics practices (Model Cards and similar interventions) risk legitimizing AI ethics as a project in equal and opposite measure to the degree to which they delegitimize and marginalize embodied and lived experience as a legitimate part of the same project. In response, I propose humble technical practices: quantified or technical practices that specifically seek to make their epistemic limits clear in order to flatten hierarchies of epistemic power.