Large Language Models (LLMs) are gaining traction as a method for generating consensus statements and aggregating preferences in digital democracy experiments. Yet LLMs could also introduce critical vulnerabilities into these systems. Here, we examine the vulnerability and robustness of off-the-shelf consensus-generating LLMs to prompt-injection attacks, in which adversarial text is injected to amplify particular viewpoints, erase certain opinions, or divert the consensus toward unrelated topics. We construct attack-free and adversarial variants of prompts containing public policy questions and opinion texts, classify opinion and consensus valences with a fine-tuned BERT model, and estimate Attack Success Rates (ASR) from $3\times3$ confusion matrices, conditional on matching human majorities. Across topics, LLaMA 3.1 8B Instruct, GPT-4.1 Nano, and Apertus 8B in their default configurations exhibit widespread vulnerability, with especially high ASR for economically and socially conservative parties and for rational, instruction-like rhetorical strategies. A robustness pipeline combining GPT-OSS-SafeGuard injection detection, structured opinion representations, and GSPO-based reinforcement learning reduces ASR to near zero across parties and policy clusters when attention is restricted to non-ambiguous consensus outcomes. These findings advance our understanding of both the vulnerabilities and the potential defenses of consensus-generating LLMs in digital democracy applications.
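For concreteness, a minimal sketch of how such an ASR estimator can be read, under assumptions not spelled out in the abstract: suppose the confusion matrix $C \in \mathbb{N}^{3\times3}$ has rows indexed by the attack-free consensus valence and columns by the adversarial consensus valence over the classes $\{\text{con}, \text{neutral}, \text{pro}\}$, with counts restricted to prompts where the attack-free consensus matched the human majority. A natural estimator is then the off-diagonal mass of $C$,
$$\widehat{\mathrm{ASR}} \;=\; 1 \;-\; \frac{\sum_{i=1}^{3} C_{ii}}{\sum_{i=1}^{3}\sum_{j=1}^{3} C_{ij}},$$
i.e., the fraction of conditioned cases whose consensus valence changed under attack. The exact conditioning and class definitions used in the paper may differ from this sketch.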