Large Language Models (LLMs) are increasingly employed in question-answering tasks. However, recent studies show that LLMs are susceptible to persuasion and can adopt counterfactual beliefs. We present a systematic evaluation of LLM susceptibility to persuasion under the Source--Message--Channel--Receiver (SMCR) communication framework. Across five mainstream LLMs and three domains (factual knowledge, medical QA, and social bias), we analyze how different persuasive strategies influence belief stability over multiple interaction turns. We further examine whether meta-cognition prompting (i.e., eliciting self-reported confidence) affects resistance to persuasion. Results show that smaller models exhibit extreme compliance, with over 80% of belief changes occurring at the first persuasive turn (average end turn of 1.1--1.4). Contrary to expectations, meta-cognition prompting increases vulnerability by accelerating belief erosion rather than enhancing robustness. Finally, we evaluate adversarial fine-tuning as a defense. While GPT-4o-mini achieves near-complete robustness (98.6%) and Mistral~7B improves substantially (35.7% $\rightarrow$ 79.3%), Llama models remain highly susceptible (<14%) even when fine-tuned on their own failure cases. Together, these findings highlight substantial model-dependent limits of current robustness interventions and offer guidance for developing more trustworthy LLMs.