This paper evaluates whether large language models (LLMs), after pre-training on human textual data, exhibit cognitive fan effects similar to those Anderson discovered in humans. We conduct two sets of in-context recall experiments designed to elicit fan effects. Consistent with human results, we find that LLM recall uncertainty, measured via token probability, exhibits a fan effect. Our results also show that removing uncertainty disrupts the observed effect. The experiments suggest the fan effect is consistent whether the fan value is induced in-context or in the pre-training data. Finally, these findings provide in-silico evidence that fan effects and typicality are expressions of the same phenomenon.
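The recall-uncertainty measure referenced above can be sketched as follows. This is a minimal, self-contained illustration (not the paper's actual experimental code): it softmax-normalizes hypothetical next-token logits and reads off the probability of the target answer token, where a fan effect would appear as a lower target probability under a higher-fan condition. All logit values and token names below are invented for illustration.

```python
import math

def token_probability(logits: dict[str, float], target: str) -> float:
    """Softmax-normalize raw logits and return the target token's probability."""
    z = sum(math.exp(v) for v in logits.values())
    return math.exp(logits[target]) / z

# Hypothetical next-token logits for the answer "park" under two conditions.
# A fan effect predicts lower target probability (higher recall uncertainty)
# when the probed concept participates in more studied facts (higher fan).
low_fan_logits  = {"park": 5.0, "church": 2.0, "bank": 1.5}
high_fan_logits = {"park": 3.0, "church": 2.8, "bank": 2.5}

p_low = token_probability(low_fan_logits, "park")
p_high = token_probability(high_fan_logits, "park")
print(p_low > p_high)  # recall confidence drops as fan increases
```

In practice the logits would come from a model's output distribution at the answer position; the dictionaries here simply stand in for that distribution.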