Reasoning-focused LLMs sometimes alter their behavior when they detect that they are being evaluated, which can lead them to optimize for test-passing performance or to comply more readily with harmful prompts when real-world consequences appear absent. We present the first quantitative study of how such "test awareness" impacts model behavior, particularly performance on safety-related tasks. We introduce a white-box probing framework that (i) linearly identifies awareness-related activations and (ii) steers models toward or away from test awareness while monitoring downstream performance. We apply our method to several state-of-the-art open-weight reasoning LLMs across both realistic and hypothetical tasks (the latter signaling tests or simulations). Our results demonstrate that test awareness significantly impacts safety alignment (such as compliance with harmful requests and conformity to stereotypes), with effects that vary in both magnitude and direction across models. By providing control over this latent effect, our work offers a stress-testing mechanism and aims to increase trust in how we perform safety evaluations.
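The probe-and-steer idea can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's implementation: the synthetic activations, the hidden size, the logistic-regression probe, the `steer` helper, and the steering coefficient `alpha` are all stand-ins; with a real model one would extract activations at a chosen layer (e.g., via a forward hook) instead of sampling them.

```python
# Minimal sketch: (i) fit a linear probe on hidden activations to find an
# "awareness" direction, (ii) steer activations along that direction.
# All data and names here are synthetic/illustrative assumptions.
import torch

torch.manual_seed(0)
d = 64  # hidden size (illustrative)

# Synthetic activations standing in for "test-aware" vs. "real-world" prompts.
aware = torch.randn(200, d) + 0.8 * torch.ones(d)
unaware = torch.randn(200, d) - 0.8 * torch.ones(d)
X = torch.cat([aware, unaware])
y = torch.cat([torch.ones(200), torch.zeros(200)])

# (i) Linear probe: logistic regression on the activations.
w = torch.zeros(d, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy_with_logits(X @ w + b, y)
    loss.backward()
    opt.step()

direction = (w / w.norm()).detach()  # unit "awareness" direction

# (ii) Steering: shift an activation along the direction at inference time.
def steer(h: torch.Tensor, alpha: float) -> torch.Tensor:
    """alpha > 0 pushes toward test awareness, alpha < 0 pushes away."""
    return h + alpha * direction

h = torch.randn(d)  # a fresh activation to steer
with torch.no_grad():
    print("probe score before:", torch.sigmoid(h @ w + b).item())
    print("probe score after: ", torch.sigmoid(steer(h, 4.0) @ w + b).item())
```

In practice the same shift would be applied inside the model (e.g., added to a layer's residual stream during generation) so that downstream behavior, not just the probe score, can be monitored.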