As generative AI technologies find more and more real-world applications, testing their performance and safety is paramount. "Red-teaming" has quickly become the primary approach to testing AI models, prioritized by AI companies and enshrined in AI policy and regulation. Members of red teams act as adversaries, probing AI systems to test their safety mechanisms and uncover vulnerabilities. Yet we know far too little about this work or its implications. This essay calls for collaboration between computer scientists and social scientists to study the sociotechnical systems surrounding AI technologies, including the work of red-teaming, to avoid repeating the mistakes of the recent past. Drawing on lessons learned from the work of content moderation, we highlight the importance of understanding the values and assumptions behind red-teaming, the labor arrangements involved, and the psychological impacts on red-teamers.