Large Language Models (LLMs) are increasingly employed in a variety of question-answering tasks. However, recent studies show that LLMs are susceptible to persuasion and can adopt counterfactual beliefs. We present a systematic evaluation of LLM susceptibility to persuasion under the Source--Message--Channel--Receiver (SMCR) communication framework. Across five mainstream LLMs and three domains (factual knowledge, medical QA, and social bias), we analyze how different persuasive strategies influence belief stability over multiple interaction turns. We further examine whether meta-cognition prompting (i.e., eliciting self-reported confidence) affects resistance to persuasion. Results show that the smallest model (Llama 3.2-3B) exhibits extreme compliance, with 82.5% of belief changes occurring at the first persuasive turn (average end turn of 1.1--1.4). Contrary to expectations, meta-cognition prompting increases vulnerability by accelerating belief erosion rather than enhancing robustness. Finally, we evaluate adversarial fine-tuning as a defense. While GPT-4o-mini achieves near-complete robustness (98.6%) and Mistral~7B improves substantially (35.7% $\rightarrow$ 79.3%), Llama models remain highly susceptible (<14%) even when fine-tuned on their own failure cases. Together, these findings highlight substantial model-dependent limits of current robustness interventions and offer guidance for developing more trustworthy LLMs.