Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often oversimplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Model (LLM) agents. Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.
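To make the setup concrete, the sketch below shows one possible way to run a small population of LLM agents whose updates are conditioned on a confirmation-bias persona injected via the system prompt. It is a minimal illustration under stated assumptions, not the study's exact protocol: `query_llm` is a hypothetical stand-in for a real LLM API call, and the persona wording, three-peer sampling, and three-way stance labels are illustrative choices.

```python
import random

# Hypothetical stand-in for an actual LLM call; in the real setting this would
# send the prompts to a chat-completion API and parse the agent's reply.
def query_llm(system_prompt: str, user_prompt: str) -> str:
    return random.choice(["agree", "disagree", "neutral"])  # placeholder output

# One possible phrasing of the prompt-engineered confirmation bias described
# above (illustrative wording, not the authors' exact prompt).
BIASED_PERSONA = (
    "You hold a firm prior opinion on climate change. "
    "Give more weight to messages that agree with your current view "
    "and discount messages that contradict it."
)

def simulation_step(opinions: list[str]) -> list[str]:
    """One interaction round: each agent reads a few peers' stances and restates its own."""
    updated = []
    for i, own in enumerate(opinions):
        peers = random.sample([o for j, o in enumerate(opinions) if j != i], k=3)
        user_prompt = (
            f"Your current opinion: {own}. "
            f"Recent messages from other agents: {peers}. "
            "State your updated opinion in one word: agree, disagree, or neutral."
        )
        updated.append(query_llm(BIASED_PERSONA, user_prompt))
    return updated

if __name__ == "__main__":
    # Ten agents with random initial stances, run for a few rounds; with the
    # biased persona one would look for fragmentation rather than consensus.
    opinions = [random.choice(["agree", "disagree", "neutral"]) for _ in range(10)]
    for _ in range(5):
        opinions = simulation_step(opinions)
    print(opinions)
```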