Large reasoning models achieve state-of-the-art performance on complex tasks, but their robustness under multi-turn adversarial pressure remains underexplored. We evaluate nine frontier reasoning models under multi-turn adversarial attacks. Our findings reveal that reasoning confers meaningful but incomplete robustness: most reasoning models studied significantly outperform instruction-tuned baselines, yet all exhibit distinct vulnerability profiles, with misleading suggestions universally effective and social pressure showing model-specific efficacy. Through trajectory analysis, we identify five failure modes (Self-Doubt, Social Conformity, Suggestion Hijacking, Emotional Susceptibility, and Reasoning Fatigue), with the first two accounting for 50% of failures. We further demonstrate that Confidence-Aware Response Generation (CARG), effective for standard LLMs, fails for reasoning models because extended reasoning traces induce overconfidence; counterintuitively, random confidence embedding outperforms targeted extraction. Our results highlight that reasoning capabilities do not automatically confer adversarial robustness and that confidence-based defenses require fundamental redesign for reasoning models.