Multi-turn interaction length is a dominant factor in the operational cost of conversational LLMs. In this work, we present a new failure mode in conversational LLMs: turn amplification, in which a model consistently prolongs multi-turn interactions without completing the underlying task. We show that an adversary can systematically exploit clarification-seeking behavior, which is commonly encouraged in multi-turn conversation settings, to prolong interactions at scale. Moving beyond prompt-level behaviors, we take a mechanistic perspective and identify a query-independent, universal activation subspace associated with clarification-seeking responses. Unlike prior cost-amplification attacks that rely on per-turn prompt optimization, our attack arises from conversational dynamics and persists across prompts and tasks. We show that this mechanism provides a scalable pathway to inducing turn amplification: both supply-chain attacks via fine-tuning and runtime attacks via low-level parameter corruption consistently shift models toward abstract, clarification-seeking behavior across prompts. Across multiple instruction-tuned LLMs and benchmarks, our attack substantially increases turn count while remaining compliant. We also show that existing defenses offer limited protection against this emerging class of failures.