We conducted three experiments to investigate how large language models (LLMs) evaluate posterior probabilities. Our results reveal the coexistence of two modes of posterior judgment in state-of-the-art models: a normative mode, which adheres to Bayes' rule, and a representativeness-based mode, which relies on similarity -- paralleling human System 2 and System 1 thinking, respectively. We also observed that LLMs struggle to recall base rate information from memory, and that developing prompt-engineering strategies to mitigate representativeness-based judgment may be challenging. We further conjecture that the dual modes of judgment may stem from the contrastive loss function employed in reinforcement learning from human feedback (RLHF). Our findings point to potential directions for reducing cognitive biases in LLMs and underscore the need for cautious deployment of LLMs in critical domains.
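For reference, the normative benchmark invoked above is Bayes' rule; a standard statement (our formulation, not taken verbatim from the experiments) is sketched below. The representativeness-based mode effectively neglects the prior $P(H)$, i.e., the base rate.

% Bayes' rule: the normative standard for posterior judgment.
% A representativeness-based judgment responds chiefly to the likelihood
% P(E | H) (similarity of the evidence to the hypothesis) and downweights
% or ignores the base rate P(H).
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}
\]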