Researchers in social science and psychology have recently proposed using large language models (LLMs) as replacements for humans in behavioral research. Beyond the debate over whether LLMs accurately capture population-level patterns, this proposal raises the question of whether LLMs capture human-like conceptual diversity. Separately, it is debated whether post-training alignment (RLHF or RLAIF) affects models' internal diversity. Inspired by human studies, we introduce a new way of measuring the conceptual diversity of synthetically-generated LLM "populations": relating the internal variability of simulated individuals to the population-level variability. We use this approach to evaluate non-aligned and aligned LLMs on two domains with rich human behavioral data. While no model reaches human-like diversity, aligned models generally display less diversity than their instruction fine-tuned counterparts. Our findings highlight a potential trade-off between increasing models' value alignment and decreasing the diversity of their conceptual representations.
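One way the variability comparison described above could be operationalized is as a variance ratio: how much simulated individuals differ from one another, relative to how much each individual varies across repeated queries. The sketch below is illustrative only, using simulated ratings rather than actual LLM outputs; the `diversity_ratio` function and all data are hypothetical, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each simulated "individual" rates the same items
# several times. Conceptual diversity is high when variability BETWEEN
# individuals is large relative to the variability WITHIN each
# individual's repeated ratings.
n_individuals, n_repeats, n_items = 20, 5, 10

# Simulated ratings: individual-specific means plus within-individual noise.
individual_means = rng.normal(0.0, 1.0, size=(n_individuals, 1, n_items))
ratings = individual_means + rng.normal(
    0.0, 0.3, size=(n_individuals, n_repeats, n_items)
)

def diversity_ratio(ratings):
    """Between-individual variance over within-individual variance,
    averaged across items. Values well above 1 indicate that
    population-level diversity exceeds individual-level noise."""
    within = ratings.var(axis=1).mean()                # across repeats
    between = ratings.mean(axis=1).var(axis=0).mean()  # across individuals
    return between / within

print(f"diversity ratio: {diversity_ratio(ratings):.2f}")
```

On this synthetic population, whose individuals genuinely differ, the ratio comes out well above 1; a "population" of near-identical individuals would drive it toward or below 1.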