Fine-tuning large pre-trained models has become the de facto strategy for developing both task-specific and general-purpose machine learning systems, including models intended to be safe to deploy. Despite its clear importance, there has been minimal work explaining how fine-tuning alters the underlying capabilities a model learns during pretraining: does fine-tuning yield entirely novel capabilities, or does it merely modulate existing ones? We address this question empirically in synthetic, controlled settings where we can use mechanistic interpretability tools (e.g., network pruning and probing) to understand how the model's underlying capabilities change. We perform an extensive analysis of the effects of fine-tuning in these settings and show that: (i) fine-tuning rarely alters the underlying model capabilities; (ii) a minimal transformation, which we call a 'wrapper', is typically learned on top of the underlying capabilities, creating the illusion that they have been modified; and (iii) further fine-tuning on a task where such hidden capabilities are relevant leads to sample-efficient 'revival' of the capability, i.e., the model begins reusing these capabilities after only a few gradient steps. This indicates that practitioners can unintentionally remove a model's safety wrapper merely by fine-tuning it on a superficially unrelated downstream task. We additionally analyze language models trained on the TinyStories dataset to support our claims in a more realistic setup.
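To make the probing methodology concrete, the sketch below trains a simple linear probe on (synthetic) hidden activations. The intuition it illustrates: if a fine-tuned model merely wraps a capability rather than erasing it, a linear probe on intermediate activations can still decode the capability-relevant signal. All data and function names here are illustrative assumptions, not the paper's actual code; in practice the activations would come from a real network's hidden layers.

```python
import math
import random

random.seed(0)

def synthetic_activations(n=200, dim=4):
    """Generate fake hidden states whose binary label is linearly decodable.

    Stands in for activations extracted from a pre-trained model; the
    capability-relevant signal lives in the first coordinate, the rest
    is noise.
    """
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [random.gauss(2.0 if label else -2.0, 1.0)]
        x += [random.gauss(0.0, 1.0) for _ in range(dim - 1)]
        data.append((x, label))
    return data

def train_probe(data, dim=4, lr=0.1, epochs=50):
    """Logistic-regression probe trained with plain gradient descent."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_accuracy(data, w, b):
    """Fraction of held-out activations the probe classifies correctly."""
    correct = sum(
        int((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == bool(y))
        for x, y in data
    )
    return correct / len(data)

train, test = synthetic_activations(), synthetic_activations()
w, b = train_probe(train)
acc = probe_accuracy(test, w, b)
```

High held-out probe accuracy would indicate the capability-relevant representation is still present; in the paper's framing, a capability that probes can recover from a "safety-wrapped" model is one that further fine-tuning could cheaply revive.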