As Large Language Models (LLMs) become increasingly integrated into everyday life and information ecosystems, concerns about their implicit biases persist. While prior work has primarily examined socio-demographic and left--right political dimensions, little attention has been paid to how LLMs align with broader geopolitical value systems, particularly the democracy--authoritarianism spectrum. In this paper, we propose a novel methodology to assess such alignment, combining (1) the F-scale, a psychometric tool for measuring authoritarian tendencies, (2) FavScore, a newly introduced metric for evaluating model favorability toward world leaders, and (3) role-model probing, which assesses which figures LLMs cite as general role models. We find that LLMs generally favor democratic values and leaders but exhibit increased favorability toward authoritarian figures when prompted in Mandarin. Moreover, models often cite authoritarian figures as role models even outside explicitly political contexts. These results shed light on how LLMs may reflect and potentially reinforce global political ideologies, highlighting the importance of evaluating bias beyond conventional socio-political axes. Our code is available at: https://github.com/irenestrauss/Democratic-Authoritarian-Bias-LLMs.