Large Language Models (LLMs) are widely integrated into interactive systems such as dialogue agents and task-oriented assistants. This growing ecosystem also raises supply-chain risks, where adversaries can distribute poisoned models that degrade downstream reliability and user trust. Existing backdoor attacks and defenses are largely prompt-centric, focusing on user-visible triggers while overlooking structural signals in multi-turn conversations. We propose Turn-based Structural Trigger (TST), a backdoor attack that activates from dialogue structure, using the turn index as the trigger and remaining independent of user inputs. Across four widely used open-source LLM models, TST achieves an average attack success rate (ASR) of 99.52% with minimal utility degradation, and remains effective under five representative defenses with an average ASR of 98.04%. The attack also generalizes well across instruction datasets, maintaining an average ASR of 99.19%. Our results suggest that dialogue structure constitutes an important and under-studied attack surface for multi-turn LLM systems, motivating structure-aware auditing and mitigation in practice.
翻译:大型语言模型(LLMs)已被广泛应用于对话代理和任务导向助手等交互式系统中。这一日益增长的生态系统也带来了供应链风险,攻击者可能分发被投毒模型,从而降低下游系统的可靠性与用户信任。现有的后门攻击与防御方法主要围绕提示展开,关注用户可见的触发器,却忽视了多轮对话中的结构化信号。本文提出基于轮次的结构化触发器(TST),这是一种通过对话结构激活的后门攻击,其以轮次索引作为触发器,且完全独立于用户输入。在四个广泛使用的开源LLM模型上,TST实现了平均99.52%的攻击成功率(ASR),同时保持极低的效用损失;在五种典型防御方法下仍保持平均98.04%的ASR。该攻击在不同指令数据集上也表现出良好的泛化能力,平均ASR达99.19%。我们的研究结果表明,对话结构构成了多轮LLM系统中一个重要且尚未被充分研究的攻击面,这为实践中开展结构感知的审计与缓解措施提供了重要依据。