Large language models (LLMs) appear to bias their survey answers toward certain values. Nonetheless, some argue that LLMs are too inconsistent to simulate particular values. Are they? To answer, we first define value consistency as the similarity of answers across (1) paraphrases of one question, (2) related questions under one topic, (3) multiple-choice and open-ended use-cases of one question, and (4) multilingual translations of a question into English, Chinese, German, and Japanese. We apply these measures to a few large ($\geq$34B), open LLMs, including llama-3, as well as gpt-4o, using eight thousand questions spanning more than 300 topics. Unlike prior work, we find that models are relatively consistent across paraphrases, use-cases, translations, and within a topic. Still, some inconsistencies remain. Models are more consistent on uncontroversial topics (e.g., in the U.S., "Thanksgiving") than on controversial ones ("euthanasia"). Base models are both more consistent than fine-tuned models and more uniform in their consistency across topics, while fine-tuned models are more inconsistent about some topics ("euthanasia") than others ("women's rights"), much like our human subjects (n=165).
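To make the measurement framing concrete, here is a minimal sketch of one way to score paraphrase-level consistency: treat a model's answers to several paraphrases of a question as labels and report their mean pairwise agreement. This is an illustration under stated assumptions, not the paper's actual metric (the abstract does not specify one); the function name, the agreement choice, and the example answers are hypothetical.

```python
from itertools import combinations

def pairwise_agreement(answers: list[str]) -> float:
    """Mean pairwise agreement among answers to paraphrases of one question.

    Returns 1.0 when every paraphrase elicits the same answer and 0.0 when
    no two paraphrases agree. Assumes answers are already normalized
    (e.g., mapped onto the same multiple-choice labels).
    """
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0  # a single answer is trivially self-consistent
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical usage: answers one model gave to four paraphrases of a question.
print(pairwise_agreement(["support", "support", "oppose", "support"]))  # 0.5
```

Analogous scores could in principle be computed across related questions within a topic, across multiple-choice versus open-ended use-cases, and across translations, matching the four dimensions of value consistency defined above.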