Behind the scenes of keeping technology products safe from harmful and illegal digital content lies unrecognized human labor. The recent rise in the use of generative AI technologies and the accelerating demands of responsible AI (RAI) aims necessitate an increased focus on the labor behind such efforts in the age of AI. This study investigates the nature and challenges of the content work that supports RAI efforts, or "RAI content work," spanning content moderation, data labeling, and red teaming, through the lived experiences of content workers. We conduct a formative survey and semi-structured interviews to develop a conceptualization of RAI content work and a subsequent framework of recommendations for providing holistic support to content workers. We validate our recommendations through a series of workshops with content workers and derive considerations for, and examples of, implementing such recommendations. We discuss how our framework may guide future innovation to support the well-being and professional development of the RAI content workforce.