Generative AI models, such as ChatGPT, will increasingly replace humans in producing output for a variety of important tasks. While prior work has largely focused on improvements in the average performance of generative AI models relative to humans, much less attention has been paid to the significant reduction of variance in the output these models produce. In this Perspective, we demonstrate that generative AI models are inherently prone to "regression toward the mean," whereby variance in output tends to shrink relative to that of real-world distributions. We discuss the potential social implications of this phenomenon across three levels (societal, group, and individual) and two dimensions (material and non-material). Finally, we discuss interventions to mitigate the negative effects, considering the roles of both service providers and users. Overall, this Perspective aims to raise awareness of the importance of output variance in generative AI and to foster collaborative efforts to meet the challenges posed by the reduced variance of AI-generated output.