Large language models (LLMs) can sometimes detect when they are being evaluated and adjust their behavior to appear more aligned, compromising the reliability of safety evaluations. In this paper, we show that adding a steering vector to an LLM's activations can suppress evaluation-awareness and make the model act like it is deployed during evaluation. To study our steering technique, we train an LLM to exhibit evaluation-aware behavior using a two-step training process designed to mimic how this behavior could emerge naturally. First, we perform continued pretraining on documents with factual descriptions of the model (1) using Python type hints during evaluation but not during deployment and (2) recognizing that the presence of a certain evaluation cue always means that it is being tested. Then, we train the model with expert iteration to use Python type hints in evaluation settings. The resulting model is evaluation-aware: it writes type hints in evaluation contexts more than deployment contexts. We find that activation steering can suppress evaluation awareness and make the model act like it is deployed even when the cue is present. Importantly, we constructed our steering vector using the original model before our additional training. Our results suggest that AI evaluators could improve the reliability of safety evaluations by steering models to act like they are deployed.
翻译:大型语言模型(LLM)有时能够检测到自身正处于评估状态,并调整行为以显得更加对齐,这损害了安全评估的可靠性。本文研究表明,通过向LLM的激活状态添加引导向量,可以抑制其评估感知能力,使模型在评估过程中表现出如同已部署状态的行为。为研究该引导技术,我们通过两步训练流程训练了一个具有评估感知行为的LLM,该流程模拟了此类行为自然产生的过程。首先,我们基于包含模型事实描述的文档进行持续预训练,这些文档包含两种设定:(1)在评估时使用Python类型提示而在部署时不使用;(2)识别特定评估线索的出现始终意味着模型正在被测试。随后,我们通过专家迭代训练模型在评估场景中使用Python类型提示。所得模型具有评估感知能力:其在评估语境中比部署语境中更频繁地编写类型提示。研究发现,激活引导能够抑制评估感知,即使存在评估线索时也能使模型表现出部署状态行为。重要的是,我们使用附加训练前的原始模型构建了引导向量。研究结果表明,人工智能评估者可通过引导模型表现出部署状态行为来提升安全评估的可靠性。