Sparse autoencoders (SAEs) decompose language model activations into interpretable features, but existing methods reveal only which features activate, not which change model outputs when amplified. We introduce Control Reinforcement Learning (CRL), which trains a policy to select SAE features for steering at each token, producing interpretable intervention logs: the learned policy identifies features that change model outputs when amplified. Adaptive Feature Masking encourages diverse feature discovery while preserving single-feature interpretability. The framework yields new analysis capabilities: branch point tracking locates tokens where feature choice determines output correctness; critic trajectory analysis separates policy limitations from value estimation errors; layer-wise comparison reveals syntactic features in early layers and semantic features in later layers. On Gemma-2 2B across MMLU, BBQ, GSM8K, HarmBench, and XSTest, CRL achieves performance improvements while providing per-token intervention logs. These results establish learned feature steering as a mechanistic interpretability tool that complements static feature analysis with dynamic intervention probes.
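The steering primitive the abstract describes can be sketched minimally: an SAE encodes an activation into feature space, and an intervention amplifies one feature by adding its decoder direction back into the activation. The sketch below uses random tied SAE weights and a greedy stand-in for the learned policy; all names (`sae_encode`, `steer`, `alpha`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tied-weight SAE: encoder columns double as decoder directions
# (an assumption for this sketch, not the paper's architecture).
d_model, n_features = 8, 32
W_enc = rng.normal(size=(d_model, n_features))
W_dec = W_enc.T

def sae_encode(x):
    # ReLU encoder: non-negative feature activations
    return np.maximum(x @ W_enc, 0.0)

def steer(x, feature_idx, alpha):
    # Amplify one feature by adding alpha times its decoder direction
    # to the residual-stream activation.
    return x + alpha * W_dec[feature_idx]

x = rng.normal(size=d_model)
f = sae_encode(x)
chosen = int(np.argmax(f))   # stand-in for the policy's per-token choice
x_steered = steer(x, chosen, alpha=4.0)

# The chosen feature's activation increases after steering.
assert sae_encode(x_steered)[chosen] > f[chosen]
```

In CRL as described, the feature index would come from the learned policy rather than `argmax`, and the choice (token position, feature, layer) would be recorded as the per-token intervention log.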