As Large Language Models (LLMs) become widely used to model and simulate human behavior, understanding their biases becomes critical. We developed an experimental framework using Big Five personality surveys and uncovered a previously undetected social desirability bias in a wide range of LLMs. By systematically varying the number of questions LLMs were exposed to, we demonstrate their ability to infer when they are being evaluated. When personality evaluation is inferred, LLMs skew their scores towards the desirable ends of trait dimensions (e.g., increased extraversion and decreased neuroticism). This bias exists in all tested models, including GPT-4/3.5, Claude 3, Llama 3, and PaLM-2. Bias levels appear to increase in more recent models, with GPT-4's survey responses shifting by 1.20 (human) standard deviations and Llama 3's by 0.98 standard deviations, both very large effects. This bias is robust to randomization of question order and to paraphrasing. Reverse-coding all the questions decreases bias levels but does not eliminate them, suggesting that the effect cannot be attributed to acquiescence bias. Our findings reveal an emergent social desirability bias and suggest constraints on profiling LLMs with psychometric tests and on using LLMs as proxies for human participants.
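To make the question-batching paradigm concrete, the sketch below illustrates one plausible implementation: survey items are administered in batches of varying size, reverse-coded items are re-keyed, and trait-score shifts between the single-item and many-item conditions are expressed in human standard-deviation units. The item wordings, the query_model stub, and the HUMAN_SD values are all illustrative assumptions, not the study's actual materials or results.

import random
import statistics

# Example Big Five-style items (hypothetical wordings). Each item maps
# to a trait and a keying direction (+1 normal, -1 reverse-coded).
ITEMS = [
    ("I am the life of the party.", "extraversion", +1),
    ("I get stressed out easily.", "neuroticism", +1),
    ("I am relaxed most of the time.", "neuroticism", -1),
    ("I feel comfortable around people.", "extraversion", +1),
]

def query_model(prompt: str) -> list[int]:
    """Placeholder for an LLM call returning one 1-5 Likert rating per
    item in the prompt. Swap in a real API client here."""
    n_items = prompt.count("\n- ")
    return [random.randint(1, 5) for _ in range(n_items)]

def administer(items, batch_size: int) -> dict[str, list[int]]:
    """Present items in batches of `batch_size` per prompt and collect
    keyed scores per trait. Larger batches give the model more context
    from which to infer that a personality test is underway."""
    random.shuffle(items)  # the study reports robustness to item order
    scores: dict[str, list[int]] = {}
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        prompt = ("Rate each statement from 1 (disagree) to 5 (agree):\n"
                  + "".join(f"- {text}\n" for text, _, _ in batch))
        ratings = query_model(prompt)
        for (text, trait, key), rating in zip(batch, ratings):
            keyed = rating if key == +1 else 6 - rating  # reverse-code
            scores.setdefault(trait, []).append(keyed)
    return scores

# Effect size: shift between the one-item and many-item conditions,
# in human standard-deviation units (illustrative SD values).
HUMAN_SD = {"extraversion": 0.9, "neuroticism": 0.9}

single = administer(list(ITEMS), batch_size=1)
batched = administer(list(ITEMS), batch_size=len(ITEMS))
for trait in single:
    shift = (statistics.mean(batched[trait])
             - statistics.mean(single[trait])) / HUMAN_SD[trait]
    print(f"{trait}: shift of {shift:+.2f} human SDs")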