Despite growing concerns about the risks of Generative AI (GenAI), there is limited understanding of public perceptions of these risks and their associated failure modes -- defined as recurring patterns of sociotechnical breakdown across the GenAI lifecycle that contribute to risks of real-world harm. To address this gap, we present a survey instrument, validated with eight subject matter experts and deployed on a sample of 960 U.S.-based participants, to assess awareness and perceptions of GenAI's failure modes, their associated risks, and stakeholder responsibilities to address them. To support realism and content validity, our instrument is structured around scenarios grounded in publicly reported incidents and a taxonomy of GenAI's failure modes. Findings suggest that our instrument is (1) effective for assessing risk awareness and perceptions in a way that is grounded in people's current contexts of use, yet is extensible to new contexts that will inevitably arise; and (2) potentially useful for informing the design of AI literacy tools and interventions. We argue for AI literacy and governance approaches that align with how people encounter and reason about GenAI in everyday life.