Identifying inputs that trigger specific behaviours or latent features in language models could have a wide range of safety use cases. We investigate a class of methods capable of generating targeted, linguistically fluent inputs that activate specific latent features or elicit model behaviours. We formalise this approach as context modification and present ContextBench -- a benchmark with tasks assessing core method capabilities and potential safety applications. Our evaluation framework measures both elicitation strength (activation of latent features or behaviours) and linguistic fluency, highlighting how current state-of-the-art methods struggle to balance these objectives. We enhance Evolutionary Prompt Optimisation (EPO) with LLM assistance and diffusion-model inpainting, and demonstrate that these variants achieve state-of-the-art performance in balancing elicitation effectiveness and fluency.