Rapid progress in general-purpose AI has sparked significant interest in "red teaming," a practice of adversarial testing that originated in military and cybersecurity applications. AI red teaming raises many questions about the human factor: how red teamers are selected, the biases and blindspots in how tests are conducted, and the psychological effects of harmful content on red teamers. A growing body of HCI and CSCW literature examines related practices, including data labeling, content moderation, and algorithmic auditing. However, few, if any, studies have investigated red teaming itself. This workshop considers the conceptual and empirical challenges associated with this practice, which is often rendered opaque by non-disclosure agreements. Future studies may explore topics ranging from fairness to mental health and other areas of potential harm. We aim to facilitate a community of researchers and practitioners who can begin to meet these challenges with creativity, innovation, and thoughtful reflection.