How could LLMs influence our democracy? We investigate LLMs' political leanings and their potential influence on voters through multiple experiments in a U.S. presidential election context. First, through a voting simulation, we demonstrate that 18 open- and closed-weight LLMs express a political preference for the Democratic nominee over the Republican nominee. By analyzing their responses to candidate-policy questions, we show that this leaning toward the Democratic nominee is more pronounced in instruction-tuned models than in their base versions. We then explore the potential impact of LLMs on voter choice in an experiment with 935 U.S. registered voters, in which participants interacted with LLMs (Claude-3, Llama-3, and GPT-4) over five exchanges. The results show a shift in voter choice toward the Democratic nominee following LLM interaction, widening the voting margin from 0.7% to 4.6%, even though the LLMs were never asked to persuade users to support the Democratic nominee. This effect is larger than that reported in many previous studies of political campaign persuasion, which find minimal effects in presidential elections. Many participants also expressed a desire for further political interaction with LLMs. Which aspects of the LLM interactions drove these shifts in voter choice remains a question for further study. Lastly, we explore how a safety method can make LLMs more politically neutral, while leaving some open questions.