As generative AI increasingly contributes to the spread of deceptively realistic misinformation, lawmakers have introduced regulations requiring the disclosure of AI-generated content. However, it is unclear whether labels reduce the risk of users falling for AI-generated misinformation. To address this research gap, we study the effect of labels on users' perception and the implications of mislabeling, focusing on AI-generated images. First, we explored users' opinions and expectations of labels in five focus groups. Although participants were wary of practical implementations, they considered labeling helpful for identifying AI-generated images and avoiding deception. Second, we conducted a survey with 1,354 participants to assess how labels affect users' ability to recognize misinformation. While labels reduced participants' belief in false claims supported by AI-generated images, we found evidence of overreliance, leading to unintended side effects: participants were more susceptible to false claims accompanied by human-made images, and more hesitant to believe true claims illustrated with labeled AI-generated images.