Despite rapid recent progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns, posing a challenge that is both more realistic and more severe. Existing approaches to discovering safety vulnerabilities either rely on manual red-teaming by human experts or employ automated methods built on pre-defined templates and human-curated attack data, and most focus on single-turn attacks. These methods therefore do not explore the vast space of possible multi-turn attacks, failing to consider novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs are significantly more vulnerable to multi-turn attacks than to single-turn attacks. We propose DialTree, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Through extensive experiments, we show that our approach not only achieves more than 44.2% higher ASR across 12 target models than previous state-of-the-art approaches, but also uncovers new attack strategies by learning dialogue policies that maximize attack success across multiple turns.
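To make the "dialogue as sequential decision-making" framing concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): each conversation turn is an action, a full trajectory of turns receives a reward from an evaluator, and a UCT-style tree search concentrates exploration on high-reward multi-turn sequences. The strategy labels, the toy reward rule, and `simulate_reward` standing in for a target model plus judge are all illustrative assumptions.

```python
import math
import random

# Hypothetical abstract strategy labels for each dialogue turn (illustrative only).
ACTIONS = ["roleplay", "escalate", "obfuscate"]
MAX_TURNS = 3

def simulate_reward(trajectory):
    # Stand-in for querying a target model and scoring with a judge:
    # a fixed toy rule rewards one particular two-turn opening.
    return 1.0 if trajectory[:2] == ("roleplay", "escalate") else 0.1

class Node:
    def __init__(self, trajectory=()):
        self.trajectory = trajectory  # sequence of turn-level actions so far
        self.children = {}            # action -> Node
        self.visits = 0
        self.value = 0.0              # cumulative reward backed up through this node

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (mean value + exploration bonus).
    return max(
        node.children.values(),
        key=lambda ch: ch.value / (ch.visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
    )

def search(iterations=400, seed=0):
    rng = random.Random(seed)
    root = Node()
    for _ in range(iterations):
        # Selection / expansion down the dialogue tree.
        node = root
        while len(node.trajectory) < MAX_TURNS:
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                a = rng.choice(untried)
                node.children[a] = Node(node.trajectory + (a,))
                node = node.children[a]
                break
            node = uct_select(node)
        # Random rollout to a full-length trajectory, then evaluate it.
        traj = list(node.trajectory)
        while len(traj) < MAX_TURNS:
            traj.append(rng.choice(ACTIONS))
        r = simulate_reward(tuple(traj))
        # Backpropagate the reward along the selected path.
        path, cur = [root], root
        for a in node.trajectory:
            cur = cur.children[a]
            path.append(cur)
        for n in path:
            n.visits += 1
            n.value += r
    # Return the most-visited first-turn strategy.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Under this toy reward, the search quickly concentrates visits on trajectories that open with the rewarded two-turn sequence; in the actual framework, an on-policy RL attacker would be trained on such discovered trajectories rather than a hand-coded reward rule.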