Activation-based steering enables Large Language Models (LLMs) to exhibit targeted behaviors by intervening on intermediate activations, without retraining. Despite its widespread use, the mechanistic factors that govern when steering succeeds or fails remain poorly understood, as prior work has relied primarily on black-box outputs or LLM-based judges. In this study, we investigate whether the reliability of steering can be diagnosed from internal model signals. We focus on two information-theoretic measures: the entropy-derived Normalized Branching Factor (NBF) and the Kullback-Leibler (KL) divergence, computed in vocabulary space, between steered activations and target concepts. We hypothesize that effective steering corresponds to structured entropy preservation and coherent KL alignment across decoding steps. Building on a reliability study that demonstrates high inter-judge agreement between two architecturally distinct LLMs, we use LLM-generated annotations as ground truth and show that these mechanistic signals offer meaningful predictive power for identifying successful steering and estimating the probability of failure. We further introduce a stronger evaluation baseline for Contrastive Activation Addition (CAA) and Sparse Autoencoder-based steering, the two most widely adopted activation-steering methods.
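The abstract does not give formulas for either measure, so the following is a minimal sketch of how the two signals could be computed at a decoding step, under stated assumptions: that NBF normalizes the entropy-derived branching factor exp(H) by vocabulary size, and that the KL term compares a logit-lens projection of the steered activation (via the unembedding matrix `W_U`) against a uniform distribution over the target concept's token ids. The function names, the normalization, and the choice of target distribution are all illustrative assumptions, not the paper's definitions.

```python
import math

import torch
import torch.nn.functional as F


def normalized_branching_factor(logits: torch.Tensor) -> torch.Tensor:
    """Entropy-derived branching factor exp(H), normalized by vocab size.

    `logits`: next-token logits of shape (vocab_size,). The exp(H)/|V|
    normalization is an assumption; exp(H) lies in [1, |V|], so the
    result lies in (0, 1].
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)  # H in nats
    return entropy.exp() / logits.shape[-1]


def concept_kl(steered_hidden: torch.Tensor,
               unembed: torch.Tensor,
               concept_token_ids: torch.Tensor) -> torch.Tensor:
    """KL(target concept || steered distribution) in vocabulary space.

    `steered_hidden`: steered residual-stream activation, shape (d_model,).
    `unembed`: unembedding matrix W_U, shape (d_model, vocab_size), used
    as a logit-lens projection. The target is taken to be uniform over
    the concept's token ids, which is an illustrative assumption.
    """
    log_q = F.log_softmax(steered_hidden @ unembed, dim=-1)
    p = 1.0 / concept_token_ids.numel()
    # KL(p || q) summed over the support of p, which avoids 0 * log 0 terms
    return (p * (math.log(p) - log_q[concept_token_ids])).sum()


# Example with random tensors (illustrative shapes only).
d_model, vocab = 64, 1000
h = torch.randn(d_model)
W_U = torch.randn(d_model, vocab)
print(normalized_branching_factor(h @ W_U).item())
print(concept_kl(h, W_U, torch.tensor([3, 17, 42])).item())
```

Restricting the KL sum to the support of the target distribution is just a numerical convenience: the omitted terms are exactly zero, and it sidesteps evaluating log p where p is zero.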