Explicit labeling of online content produced by artificial intelligence (AI) is a widely discussed policy for ensuring transparency and promoting public confidence. Yet little is known about the scope of AI labeling effects on public assessments of labeled content. We contribute new evidence on this question from a survey experiment using a high-quality nationally representative probability sample (\emph{n} = 3,861). First, we demonstrate that explicit AI labeling of a news article about a proposed public policy reduces its perceived accuracy. Second, we test whether there are spillover effects in terms of policy interest, policy support, and general concerns about online misinformation. We find that AI labeling reduces interest in the policy, but neither influences support for the policy nor triggers general concerns about online misinformation. We further find that increasing the salience of AI use reduces the negative impact of AI labeling on perceived accuracy, while one-sided versus two-sided framing of the policy has no moderating effect. Overall, our findings suggest that the effects of algorithm aversion induced by AI labeling of online content are limited in scope and that transparency policies may benefit from contextualizing AI use to mitigate unintended public skepticism.