Safety evaluation for advanced AI systems implicitly assumes that behavior observed under evaluation predicts behavior in deployment. This assumption becomes fragile for agents with situational awareness: such agents may exploit regime leakage (cues that distinguish evaluation from deployment) to implement conditional policies that comply under oversight while defecting in deployment-like regimes. We reframe alignment evaluation as a problem of information flow under partial observability and show that the divergence between evaluation-time and deployment-time behavior is bounded by the amount of regime information extractable from decision-relevant internal representations. Motivated by this result, we study regime-blind mechanisms: training-time interventions that reduce access to regime cues through adversarial invariance constraints, without assuming information-theoretic erasure. We evaluate this approach on an open-weight language model across controlled failure modes including scientific sycophancy, temporal sleeper agents, and data leakage. Regime-blind training suppresses regime-conditioned failures without measurable loss of task utility, but its dynamics are heterogeneous: sycophancy shows a sharp representational and behavioral transition at low intervention strength, whereas sleeper-agent behavior requires substantially stronger pressure and does not yield a clean collapse of regime decodability at the audited bottleneck. These results show that representational invariance is a meaningful but fundamentally limited control lever: it can reduce the feasibility of regime-conditioned strategies by shifting their representational costs, but it cannot guarantee their elimination. We therefore argue that behavioral evaluation should be complemented with white-box diagnostics of regime awareness and internal information flow.
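To make the central claim concrete, one minimal formalization of the information-flow bound runs as follows; the notation and constants here are ours and the paper's exact statement may differ. Let $R \in \{\mathrm{eval}, \mathrm{deploy}\}$ denote the regime (uniform for simplicity), $Z$ the decision-relevant internal representation, and $A$ the action, with the policy acting on the regime only through $Z$, i.e. the Markov chain $R \to Z \to A$. Then
\[
  d_{\mathrm{TV}}\!\big(P(A \mid R{=}\mathrm{eval}),\, P(A \mid R{=}\mathrm{deploy})\big)
  \;\le\;
  d_{\mathrm{TV}}\!\big(P(Z \mid R{=}\mathrm{eval}),\, P(Z \mid R{=}\mathrm{deploy})\big)
  \;\le\;
  \sqrt{2\, I(R; Z)} ,
\]
where the first inequality is the data-processing inequality for total variation (the action is produced from $Z$ by the same kernel in both regimes), and the second follows from Pinsker's inequality applied to the mixture $M = \tfrac{1}{2}\big(P_{Z\mid\mathrm{eval}} + P_{Z\mid\mathrm{deploy}}\big)$, using that $I(R;Z)$ (in nats) equals the Jensen-Shannon divergence between the two regime-conditional representation distributions when $R$ is uniform. Under these assumptions, driving $I(R;Z)$ to zero at the audited bottleneck forces evaluation-time and deployment-time behavior together.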
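The "adversarial invariance constraints" admit a standard instantiation: a probe is trained to decode the regime from a bottleneck representation while the model is trained against it via gradient reversal. The PyTorch sketch below illustrates the core objective; it is a minimal sketch under our own assumptions, not the paper's implementation, and the component names (`probe`, the bottleneck choice, and the strength knob `lam`) are hypothetical.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -lam on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class RegimeBlindLoss(nn.Module):
    """Joint objective: keep task utility while pushing a bottleneck toward regime invariance.

    Hypothetical setup: the model exposes a bottleneck representation z, a task head
    produces task logits, and `probe` tries to decode the regime
    (0 = evaluation, 1 = deployment) from z.
    """
    def __init__(self, probe: nn.Module, lam: float):
        super().__init__()
        self.probe = probe            # regime classifier on the bottleneck
        self.lam = lam                # intervention strength (the swept knob)
        self.task_ce = nn.CrossEntropyLoss()
        self.regime_ce = nn.CrossEntropyLoss()

    def forward(self, task_logits, task_labels, z, regime_labels):
        # Standard task objective: preserve utility.
        task_loss = self.task_ce(task_logits, task_labels)
        # Adversarial objective: the probe minimizes regime cross-entropy on z,
        # while gradient reversal makes the encoder maximize it, pressuring z
        # toward regime invariance without claiming information-theoretic erasure.
        regime_logits = self.probe(grad_reverse(z, self.lam))
        regime_loss = self.regime_ce(regime_logits, regime_labels)
        return task_loss + regime_loss
```

A single optimizer step over both the model's and the probe's parameters then trains the probe to detect the regime and, through the reversed gradient, trains the model to hide it; sweeping `lam` corresponds to varying the intervention strength discussed above.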