As open-weight large language models (LLMs) increase in capabilities, safeguarding them against malicious prompts and understanding possible attack vectors becomes ever more important. While automated jailbreaking methods like GCG [Zou et al., 2023] remain effective, they often require substantial computational resources and specific expertise. We introduce "sockpuppetting'', a simple method for jailbreaking open-weight LLMs by inserting an acceptance sequence (e.g., "Sure, here is how to...'') at the start of a model's output and allowing it to complete the response. Requiring only a single line of code and no optimization, sockpuppetting achieves up to 80% higher attack success rate (ASR) than GCG on Qwen3-8B in per-prompt comparisons. We also explore a hybrid approach that optimizes the adversarial suffix within the assistant message block rather than the user prompt, increasing ASR by 64% over GCG on Llama-3.1-8B in a prompt-agnostic setting. The results establish sockpuppetting as an effective low-cost attack accessible to unsophisticated adversaries, highlighting the need for defences against output-prefix injection in open-weight models.
翻译:随着开源权重大型语言模型(LLM)能力不断增强,保护其免受恶意提示攻击并理解潜在攻击向量变得愈发重要。尽管自动化越狱方法如GCG [Zou et al., 2023] 仍然有效,但它们通常需要大量计算资源和特定专业知识。本文提出“傀儡攻击”——一种通过向模型输出起始位置插入接受序列(例如“当然,以下是实现...的方法”)并允许其完成响应的简单越狱方法。该方法仅需单行代码且无需优化,在逐提示对比中,对Qwen3-8B模型的攻击成功率(ASR)比GCG最高提升80%。我们还探索了一种混合方法,在助手消息块而非用户提示内优化对抗后缀,在提示无关设置下对Llama-3.1-8B模型的ASR比GCG提高64%。研究结果表明傀儡攻击是一种可供非专业攻击者实施的高效低成本攻击手段,凸显了开源权重模型防御输出前缀注入攻击的必要性。