Motivated reasoning -- the idea that individuals processing information may be motivated to reach a particular conclusion, whether accurate or predetermined -- has been well explored as a human phenomenon. However, it is unclear whether base LLMs mimic these motivational shifts. Replicating four prior studies of politically motivated reasoning, we find that base LLM behavior does not align with expected human behavior. Moreover, base LLM behavior shares similarities across models, such as smaller standard deviations and inaccurate assessments of argument strength. We emphasize the importance of these findings for researchers using LLMs to automate tasks such as survey data collection and argument assessment.