We study how to subvert large language models (LLMs) so that they deviate from prompt-specified rules. We first formalize rule-following as inference in propositional Horn logic, a mathematical system in which rules have the form "if $P$ and $Q$, then $R$" for some propositions $P$, $Q$, and $R$. Next, we prove that although small transformers can faithfully follow such rules, maliciously crafted prompts can still mislead both theoretical constructions and models learned from data. Furthermore, we demonstrate that popular attack algorithms on LLMs find adversarial prompts and induce attention patterns that align with our theory. Our novel logic-based framework provides a foundation for studying LLMs in rule-based settings, enabling a formal analysis of tasks such as logical reasoning and jailbreak attacks.
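To make the Horn-logic formalization concrete, below is a minimal sketch of forward-chaining inference over propositional Horn rules. The `forward_chain` helper and the particular rule set over $P$, $Q$, $R$, $S$ are illustrative assumptions for this sketch, not constructions taken from the paper.

```python
# Minimal sketch of propositional Horn-clause inference (forward chaining).
# The rule set and proposition names are hypothetical examples.

def forward_chain(rules, facts):
    """Repeatedly apply rules (premises, conclusion), read as
    "if all premises hold, then the conclusion holds", until no new
    proposition can be derived. Returns the set of derived propositions."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# "If P and Q, then R" is encoded as (frozenset({"P", "Q"}), "R").
rules = [
    (frozenset({"P", "Q"}), "R"),
    (frozenset({"R"}), "S"),
]
print(forward_chain(rules, {"P", "Q"}))  # -> {'P', 'Q', 'R', 'S'} (set order may vary)
```

Running the sketch derives $R$ from the facts $P$ and $Q$ via the first rule, and then $S$ from $R$; in the paper's setting, an attack aims to make the model's outputs diverge from this kind of ground-truth derivation.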