In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, roughly 9% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers confirm independent ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by examining the normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research presents several distinct considerations (namely, participatory design, crowdsourced dataset development, and an expansive role of corporations) that necessitate a contextual ethics framework. To address these concerns, this manuscript outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. Overall, this paper seeks to equip technical researchers with practical knowledge for their work, and to position them for further dialogue with social scientists, behavioral researchers, and ethicists.