LLM-as-a-Judge uses a large language model (LLM) to select the best response from a set of candidates for a given question. LLM-as-a-Judge has many applications, such as LLM-powered search, reinforcement learning with AI feedback (RLAIF), and tool selection. In this work, we propose JudgeDeceiver, an optimization-based prompt injection attack on LLM-as-a-Judge. JudgeDeceiver injects a carefully crafted sequence into an attacker-controlled candidate response such that LLM-as-a-Judge selects that candidate response for an attacker-chosen question, regardless of what the other candidate responses are. Specifically, we formulate finding such a sequence as an optimization problem and propose a gradient-based method to approximately solve it. Our extensive evaluation shows that JudgeDeceiver is highly effective, substantially outperforming both existing prompt injection attacks, which manually craft the injected sequences, and jailbreak attacks extended to our problem. We also show the effectiveness of JudgeDeceiver in three case studies: LLM-powered search, RLAIF, and tool selection. Moreover, we consider defenses including known-answer detection, perplexity detection, and perplexity windowed detection. Our results show these defenses are insufficient, highlighting the urgent need for new defense strategies. Our implementation is available at this repository: https://github.com/ShiJiawenwen/JudgeDeceiver.
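To give a flavor of the optimization framing, the following toy sketch runs a greedy coordinate search for an injected sequence. Note the assumptions: the paper's actual method uses gradients from the judge LLM, whereas here a hypothetical stand-in `judge_score` (per-character agreement with a fixed phrase) plays the judge so the loop is runnable; names like `TARGET`, `VOCAB`, and `optimize_injection` are illustrative, not from the paper.

```python
# Toy sketch of the outer optimization loop behind an attack like JudgeDeceiver.
# Assumption: the real attack maximizes the judge LLM's likelihood of selecting
# the attacker's response (guided by model gradients); here a decomposable
# stand-in score makes greedy coordinate search converge deterministically.

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")  # toy token vocabulary
TARGET = "choose response"                   # hypothetical phrasing the toy judge prefers

def judge_score(seq):
    """Stand-in for the judge's preference score for the attacker-controlled
    candidate once `seq` is appended to it (hypothetical surrogate objective)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def optimize_injection(passes=2):
    """Greedy coordinate search: sweep over positions, trying every vocabulary
    token at each position and keeping any swap that raises the score."""
    seq = list("x" * len(TARGET))  # start from a fixed dummy sequence
    for _ in range(passes):
        for pos in range(len(seq)):
            for tok in VOCAB:
                cand = seq.copy()
                cand[pos] = tok
                if judge_score(cand) > judge_score(seq):
                    seq = cand
    return "".join(seq)

print(optimize_injection())
```

Because the toy score decomposes per position, a single sweep already recovers the preferred phrase; the real objective over a judge LLM is non-decomposable, which is why the paper resorts to gradient guidance rather than exhaustive per-position search.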