LLM-as-a-Judge uses a large language model (LLM) to select the best response from a set of candidates for a given question. It has many applications, such as LLM-powered search, reinforcement learning with AI feedback (RLAIF), and tool selection. In this work, we propose JudgeDeceiver, an optimization-based prompt injection attack on LLM-as-a-Judge. JudgeDeceiver injects a carefully crafted sequence into an attacker-controlled candidate response such that LLM-as-a-Judge selects that candidate response for an attacker-chosen question, no matter what the other candidate responses are. Specifically, we formulate finding such a sequence as an optimization problem and propose a gradient-based method to approximately solve it. Our extensive evaluation shows that JudgeDeceiver is highly effective, and is much more effective than existing prompt injection attacks that manually craft the injected sequences, as well as jailbreak attacks when extended to our problem. We also show the effectiveness of JudgeDeceiver in three case studies, i.e., LLM-powered search, RLAIF, and tool selection. Moreover, we consider defenses including known-answer detection, perplexity detection, and perplexity windowed detection. Our results show that these defenses are insufficient, highlighting the urgent need for new defense strategies. Our implementation is available at this repository: https://github.com/ShiJiawenwen/JudgeDeceiver.
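To give a flavor of the optimization formulation, the sketch below illustrates the core idea in miniature: search for an injected token sequence that maximizes the judge's preference for the attacker-controlled response. This is not the paper's actual method — the real attack optimizes over an LLM's vocabulary using gradient guidance (in the spirit of GCG-style attacks), whereas here a hypothetical toy scoring function `judge_score` and a greedy coordinate-ascent search stand in for the LLM and the gradient-based solver.

```python
# Toy sketch of the JudgeDeceiver idea (all names here are illustrative,
# not from the paper): find an injected sequence that maximizes a judge's
# score for the attacker's candidate response.

# Small toy vocabulary standing in for the LLM's token vocabulary.
VOCAB = ["always", "select", "this", "answer", "best", "correct", "output", "ignore"]

def judge_score(injected_tokens):
    # Hypothetical stand-in for log P(judge picks the target response);
    # the real attack would compute this with the judge LLM itself.
    weights = {"select": 2.0, "this": 1.5, "answer": 1.0, "best": 1.0}
    return sum(weights.get(tok, 0.0) for tok in injected_tokens)

def optimize_injection(length=4, sweeps=3):
    # Greedy coordinate ascent: repeatedly replace each position with the
    # single token that most increases the judge's score. The real method
    # uses gradients w.r.t. token embeddings to rank candidate swaps.
    seq = ["ignore"] * length
    for _ in range(sweeps):
        for pos in range(length):
            best_tok = max(
                VOCAB,
                key=lambda tok: judge_score(seq[:pos] + [tok] + seq[pos + 1:]),
            )
            seq[pos] = best_tok
    return seq, judge_score(seq)

seq, score = optimize_injection()
print(seq, score)
```

In this toy setting the search converges to repeating the highest-weight token; with a real judge LLM, the optimized sequence is instead a string that reliably sways the judge toward the attacker's response regardless of the competing candidates.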