Chain-of-thought (CoT) monitors are LLM-based systems that analyze reasoning traces to detect when outputs may exhibit attributes of interest, such as test-hacking behavior during code generation. In this paper, we use information-theoretic analysis to show that non-zero mutual information between the CoT and the output is a necessary but not sufficient condition for CoT monitorability. We identify two sources of approximation error that can undermine the performance of CoT monitors in practice: the information gap, which measures the extent to which the monitor can extract the information available in the CoT, and the elicitation error, which measures how closely the monitor approximates the optimal monitoring function. We further demonstrate that CoT monitorability can be systematically improved through targeted training objectives. To this end, we propose two complementary approaches: (a) an oracle-based method that directly rewards the monitored model for producing CoTs that maximize monitor accuracy, and (b) a more practical, label-free approach that maximizes the conditional mutual information between outputs and CoTs. Across multiple environments, we show that both methods significantly improve monitor accuracy while preventing CoT degeneration, even when training against a monitor, thereby mitigating reward hacking when the task reward is imperfectly specified.
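To make the monitorability claim concrete, here is a minimal formal sketch under assumed notation (the abstract fixes no symbols): write C for the CoT, Y for the output whose attributes are being monitored, and M for the monitor, which sees only C. Then Y → C → M(C) is a Markov chain, and the data-processing inequality bounds the information any monitor can use:

\[
    I\big(M(C);\,Y\big) \;\le\; I(C;\,Y).
\]

Monitorability requires the left-hand side to be positive, for which I(C; Y) > 0 is necessary; it is not sufficient, because the inequality can be arbitrarily slack. In the abstract's terms, the slack left by a monitor's limited reading of the CoT corresponds to the information gap, while the monitor's deviation from the optimal monitoring function (e.g., the Bayes predictor of Y from C) corresponds to the elicitation error.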
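The label-free objective in (b) can be sketched as a per-example reward. The abstract does not specify an estimator, so the following assumes the standard pointwise form of conditional mutual information, conditioning on the prompt x: a CoT c is rewarded by how much it raises the model's log-probability of its own final output y, i.e., log p(y | x, c) - log p(y | x). All function and variable names below are illustrative, not the paper's API; `model` is assumed to be a causal LM callable mapping a (1, T) tensor of token ids to (1, T, V) next-token logits.

import torch
import torch.nn.functional as F

def sequence_logprob(model, context_ids, target_ids):
    # Sum of log p(target_t | context, target_<t) under a causal LM.
    ids = torch.cat([context_ids, target_ids]).unsqueeze(0)
    logits = model(ids).squeeze(0)                       # (T, V)
    # Logits at position t predict the token at position t + 1.
    start = context_ids.numel() - 1
    preds = logits[start : start + target_ids.numel()]   # (T_y, V)
    logp = F.log_softmax(preds, dim=-1)
    return logp.gather(1, target_ids.unsqueeze(1)).sum()

def cmi_reward(model, prompt_ids, cot_ids, output_ids):
    # Pointwise conditional-MI reward: log p(y | x, c) - log p(y | x).
    # Positive when the CoT makes the final output more predictable,
    # i.e., when the CoT actually carries information about the output.
    with torch.no_grad():
        with_cot = sequence_logprob(
            model, torch.cat([prompt_ids, cot_ids]), output_ids)
        without_cot = sequence_logprob(model, prompt_ids, output_ids)
    return (with_cot - without_cot).item()

Averaged over on-policy samples, this difference estimates I(Y; C | X) under the model's own distribution, so maximizing it as a training reward pushes CoTs to remain genuinely informative about the final output rather than degenerating, which is consistent with the behavior the abstract reports.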