This position paper examines the use of Large Language Models (LLMs) in social simulation, analyzing their potential and limitations from a computational social science perspective. We first review recent findings on LLMs' ability to replicate key aspects of human cognition, including Theory of Mind reasoning and social inference, while identifying persistent limitations such as cognitive biases, lack of grounded understanding, and behavioral inconsistencies. We then survey emerging applications of LLMs in multi-agent simulation frameworks, examining system architectures, scalability, and validation strategies. Projects such as Generative Agents (Smallville) and AgentSociety are analyzed with respect to their empirical grounding and methodological design. Particular attention is given to the challenges of behavioral fidelity, calibration, and reproducibility in large-scale LLM-driven simulations. Finally, we distinguish between contexts where LLM-based agents provide operational value, such as interactive simulations and serious games, and contexts where their use raises epistemic concerns, particularly explanatory or predictive modeling. We argue that hybrid approaches integrating LLMs into established agent-based modeling platforms such as GAMA and NetLogo may offer a promising compromise between expressive flexibility and analytical transparency. Building on this analysis, we outline a conceptual research direction, termed Hybrid Constitutional Architectures, which proposes a stratified integration of classical agent-based models (ABMs), small language models (SLMs), and LLMs within these platforms.