In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, around 12% and 6% of publications at recent AAAI and NeurIPS conferences, respectively, indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical, transparent research practices with human participants. Fewer than one in four of these AAAI and NeurIPS papers provides details of ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by exploring normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historical lessons and helpful insights, AI research raises several specific concerns (namely, participatory design, crowdsourced dataset development, and the expansive role of corporations) that necessitate a contextual ethics framework. To address these concerns, this paper outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. These guidelines can be found in Section 4 on pp. 4–7.