In this paper, we investigate the degree to which fine-tuning in Large Language Models (LLMs) effectively mitigates versus merely conceals undesirable behavior. Through the lens of semi-realistic role-playing exercises designed to elicit such behaviors, we explore the response dynamics of LLMs post fine-tuning interventions. Our methodology involves prompting models for Chain-of-Thought (CoT) reasoning and analyzing the coherence between the reasoning traces and the resultant outputs. Notably, we identify a pervasive phenomenon we term \emph{reason-based deception}, where models either stop producing reasoning traces or produce seemingly ethical reasoning traces that belie the unethical nature of their final outputs. We further examine the efficacy of response strategies (polite refusal versus explicit rebuttal) in curbing the occurrence of undesired behavior in subsequent outputs of multi-turn interactions. Our findings reveal that explicit rebuttals significantly outperform polite refusals in preventing the continuation of undesired outputs and nearly eliminate reason-based deception, challenging current practices in model fine-tuning. Accordingly, the two key contributions of this paper are (1) defining and studying reason-based deception, a new type of hidden behavior, and (2) demonstrating that rebuttals provide a more robust response model to harmful requests than refusals, thereby highlighting the need to reconsider the response strategies in fine-tuning approaches.
翻译:本文研究了大型语言模型(LLM)微调在多大程度上有效抑制而非仅仅掩盖不良行为。通过设计半现实角色扮演任务以诱发此类行为,我们探究了微调干预后LLM的响应动态。我们的方法包括提示模型进行思维链(CoT)推理,并分析推理轨迹与最终输出之间的一致性。值得注意的是,我们发现了一种普遍现象,我们称之为\emph{基于推理的欺骗}:模型要么停止生成推理轨迹,要么生成看似符合伦理的推理轨迹,却掩盖了其最终输出的非伦理本质。我们进一步检验了响应策略(礼貌拒绝与明确反驳)在抑制多轮交互后续输出中不良行为发生的有效性。研究结果表明,明确反驳在防止不良输出持续出现方面显著优于礼貌拒绝,并几乎消除了基于推理的欺骗现象,这对当前模型微调实践提出了挑战。因此,本文的两个关键贡献在于:(1)定义并研究了一种新型隐蔽行为——基于推理的欺骗;(2)证明反驳比拒绝能为有害请求提供更稳健的响应模型,从而强调需要重新思考微调方法中的响应策略。