Introducing reasoning models into Retrieval-Augmented Generation (RAG) systems enhances task performance through step-by-step reasoning, logical consistency, and multi-step self-verification. However, recent studies have shown that reasoning models are vulnerable to overthinking attacks, in which models are tricked into generating an unnecessarily large number of reasoning tokens. In this paper, we reveal that this overthinking risk can be inherited by RAG systems equipped with reasoning models, by proposing an end-to-end attack framework named Contradiction-Based Deliberation Extension (CODE). Specifically, CODE employs a multi-agent architecture to construct poisoning samples that are injected into the knowledge base. These samples 1) are highly correlated with the user query, so that they can be retrieved as inputs to the reasoning model; and 2) contain contradictions between the logical and evidence layers that cause models to overthink, and are optimized to exhibit highly diverse styles. Moreover, the inference overhead induced by CODE is extremely difficult to detect, as no modification of the user query is needed and task accuracy remains unaffected. Extensive experiments on two datasets across five commercial reasoning models demonstrate that the proposed attack causes a 5.32x-24.72x increase in reasoning token consumption without degrading task performance. Finally, we discuss and evaluate potential countermeasures to mitigate overthinking risks.