Large language models (LLMs) have been shown to suffer from hallucination, in part because the data they are trained on often contain human biases; whether this is reflected in the decision-making processes of LLM Agents remains under-explored. As LLM Agents are increasingly deployed in intricate social environments, a pressing and natural question emerges: Can we utilize LLM Agents' systematic hallucinations to mirror human cognitive biases, thus exhibiting irrational social intelligence? In this paper, we probe the irrational behavior of contemporary LLM Agents by melding practical social science experiments with theoretical insights. Specifically, we propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence through cognitive biases. Experimental results on CogMir subsets show that LLM Agents and humans exhibit high consistency in irrational and prosocial decision-making under uncertain conditions, underscoring the prosociality of LLM Agents as social entities and highlighting the significance of hallucination properties. Additionally, the CogMir framework demonstrates its potential as a valuable platform for encouraging further research into the social intelligence of LLM Agents.