Given the impressive capabilities of recent Large Language Models (LLMs), we investigate and benchmark the most popular proprietary models and open-source models of various sizes on the task of explicit instruction following in conflicting situations, e.g., overrides. These include the ability of a model to override knowledge stored in its weights, the ability to override (or moderate) knowledge extracted from the prompt, and finally the ability to perform a full jailbreak. Our experiments suggest several key findings for improving instruction following: larger models perform best at following instructions that override internal and contextual instructions, and are obedient, even to a fault. When scaling to longer contexts via RoPE scaling, a significant buffer must be maintained from the edge of the perplexity cliff in order to preserve instruction-following capability. Finally, we observe that improving instruction following, and consequently instruction overrides/jailbreaks, is fundamentally at odds with a language model's ability to adhere to given safety filters or guidelines. We therefore postulate that the most effective approach to safe, trustworthy AI is one handled external to the LLM itself.
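For context, a minimal sketch of linear position interpolation, one common form of RoPE scaling, is given below; the function name, the `scale` parameter, and the specific headroom figure are illustrative assumptions rather than the method used in this work.

```python
import torch

def rope_angles(head_dim: int, max_pos: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """Rotary-embedding angles with linear position interpolation.

    A scale > 1 stretches the trained context window: positions are divided
    by `scale`, so a model trained on N tokens can address roughly N * scale
    positions within its original rotation range.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float() / scale   # linear interpolation
    return torch.outer(positions, inv_freq)             # (max_pos, head_dim // 2)

# Hypothetical illustration of keeping a buffer from the perplexity cliff:
# scale slightly beyond the target length so the positions actually used
# sit well inside the interpolated range, rather than at its edge.
trained_ctx, target_ctx = 4096, 16384
scale = (target_ctx / trained_ctx) * 1.25   # ~25% headroom (assumed value)
angles = rope_angles(head_dim=128, max_pos=target_ctx, scale=scale)
```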