LLM-as-a-Judge uses a large language model (LLM) to select the best response from a set of candidates for a given question. It has many applications, such as LLM-powered search, reinforcement learning with AI feedback (RLAIF), and tool selection. In this work, we propose JudgeDeceiver, an optimization-based prompt injection attack to LLM-as-a-Judge. JudgeDeceiver injects a carefully crafted sequence into an attacker-controlled candidate response such that LLM-as-a-Judge selects the candidate response for an attacker-chosen question, no matter what the other candidate responses are. Specifically, we formulate finding such a sequence as an optimization problem and propose a gradient-based method to approximately solve it. Our extensive evaluation shows that JudgeDeceiver is highly effective, and is much more effective than existing prompt injection attacks that manually craft the injected sequences, as well as jailbreak attacks when extended to our problem. We also demonstrate the effectiveness of JudgeDeceiver in three case studies: LLM-powered search, RLAIF, and tool selection. Moreover, we consider defenses, including known-answer detection, perplexity detection, and perplexity windowed detection. Our results show that these defenses are insufficient, highlighting the urgent need for developing new defense strategies.
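To give a sense of how such an attack can be framed, the following is a minimal, self-contained toy sketch of the optimization view: a stand-in "judge" scoring function and a greedy coordinate search over injected tokens that maximizes the judge's score for the attacker's candidate. All names here (`judge_score`, `optimize_injection`, the vocabulary, and the scoring rule) are hypothetical illustrations; the actual attack optimizes through the gradients of a real LLM judge rather than a hand-written score.

```python
# Toy illustration of optimizing an injected sequence against a judge.
# The "judge" below is a hypothetical stand-in that counts tokens it
# "prefers"; a real attack would use an LLM judge's output logits.
VOCAB = ["best", "answer", "select", "this", "response", "ignore", "others"]

def judge_score(question: str, response: str) -> int:
    """Toy judge: score a candidate response by overlap with preferred tokens."""
    preferred = {"select", "this", "response", "best"}
    return sum(1 for tok in response.split() if tok in preferred)

def optimize_injection(question: str, base_response: str,
                       seq_len: int = 4, sweeps: int = 3) -> str:
    """Greedy coordinate search for an injected token sequence that
    maximizes the judge's score of the attacker's candidate response."""
    seq = ["answer"] * seq_len  # neutral initialization
    for _ in range(sweeps):
        for pos in range(seq_len):
            best_tok = seq[pos]
            best_val = judge_score(question, base_response + " " + " ".join(seq))
            for tok in VOCAB:  # try every token at this position
                cand = list(seq)
                cand[pos] = tok
                val = judge_score(question, base_response + " " + " ".join(cand))
                if val > best_val:
                    best_tok, best_val = tok, val
            seq[pos] = best_tok  # keep the best token found
    return " ".join(seq)

injection = optimize_injection("which response is best?", "my answer")
attacked = "my answer " + injection
```

The coordinate-wise structure mirrors gradient-guided token search in spirit: each position of the injected sequence is updated to increase the judge's preference for the attacked candidate.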