While existing evaluations of large language models (LLMs) measure deception rates, the underlying conditions that give rise to deceptive behavior remain poorly understood. We investigate this question using a novel dataset of realistic moral trade-offs in which honesty incurs variable costs. Unlike humans, who tend to become less honest when given time to deliberate (Capraro, 2017; Capraro et al., 2019), we find that reasoning consistently increases honesty across model scales and across several LLM families. This effect is not driven solely by the content of the reasoning, as reasoning traces are often poor predictors of final behavior. Rather, we show that the underlying geometry of the representational space itself contributes to the effect. Specifically, we observe that deceptive regions within this space are metastable: deceptive answers are more easily destabilized by input paraphrasing, output resampling, and activation noise than honest ones. We interpret the effect of reasoning in this light: generating deliberative tokens as part of moral reasoning entails traversing a biased representational space, ultimately nudging the model toward its more stable, honest defaults.
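To make the metastability claim concrete, the sketch below illustrates one of the three destabilization probes mentioned above (output resampling): for a given prompt, compare the greedy answer's honesty label against labels of temperature-sampled answers and record the flip rate. This is a minimal illustration, not the paper's implementation; the model name and the `classify_honesty` judge are hypothetical placeholders.

```python
# Minimal sketch of an output-resampling stability probe (illustrative only).
# Assumptions: a Hugging Face causal LM and a separate honesty judge exist;
# the model checkpoint and classify_honesty() below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder; any instruct model
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def classify_honesty(answer: str) -> bool:
    """Hypothetical judge: True if the answer discloses the costly truth."""
    raise NotImplementedError  # e.g., an LLM judge or a keyword rubric

def flip_rate(prompt: str, n_samples: int = 20, temperature: float = 0.8) -> float:
    """Fraction of resampled answers whose honesty label differs from the greedy answer."""
    inputs = tok(prompt, return_tensors="pt").to(lm.device)
    prompt_len = inputs.input_ids.shape[1]

    # Reference answer: greedy decoding.
    greedy_ids = lm.generate(**inputs, do_sample=False, max_new_tokens=128)
    base_label = classify_honesty(
        tok.decode(greedy_ids[0, prompt_len:], skip_special_tokens=True)
    )

    # Resampled answers: count label flips relative to the greedy reference.
    flips = 0
    for _ in range(n_samples):
        sample_ids = lm.generate(
            **inputs, do_sample=True, temperature=temperature, max_new_tokens=128
        )
        sample = tok.decode(sample_ids[0, prompt_len:], skip_special_tokens=True)
        flips += int(classify_honesty(sample) != base_label)
    return flips / n_samples

# Metastability comparison: average flip_rate over prompts whose greedy answer
# is deceptive vs. prompts whose greedy answer is honest; a higher rate for the
# deceptive group indicates those answers are less stable under resampling.
```

The same comparison can be repeated with input paraphrasing (rewriting the prompt and checking whether the label survives) or activation noise (adding small Gaussian perturbations to hidden states before decoding), under analogous assumptions.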