Fine-tuning large language models (LLMs) in one or more phases has become a necessary step to unlock various capabilities, enabling LLMs to follow natural language instructions or align with human preferences. However, sequential training carries the risk of catastrophic forgetting: the parametric knowledge or abilities learned in previous stages may be overwhelmed by incoming training data. In this paper, we find that by regularly resetting partial parameters, LLMs can restore some of the original knowledge. Inspired by this, we introduce Half Fine-Tuning (HFT) for LLMs, as a substitute for full fine-tuning (FFT), to mitigate the forgetting issue: half of the parameters are selected to learn new tasks while the other half are frozen to retain previous knowledge. We provide a feasibility analysis from the perspective of optimization and interpret the parameter selection operation as a regularization term. Without changing the model architecture, HFT can be seamlessly integrated into existing fine-tuning frameworks. Extensive experiments and analysis on supervised fine-tuning, direct preference optimization, and continual learning consistently demonstrate the effectiveness, robustness, and efficiency of HFT. Compared with FFT, HFT not only significantly alleviates the forgetting problem, but also achieves the best performance on a series of downstream benchmarks, with an approximately 30% reduction in training time.
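To make the core idea concrete, the following is a minimal sketch of the freeze-half/update-half mechanism described above, assuming a simple random split over parameter tensors; the function name and the tensor-level granularity are illustrative assumptions, not the paper's exact selection scheme.

```python
import random
import torch


def apply_half_fine_tuning_mask(model: torch.nn.Module, seed: int = 0) -> None:
    """Sketch of HFT-style parameter selection (assumed tensor-level split).

    Roughly half of the parameter tensors stay trainable to learn the new
    task; the other half are frozen so previously learned knowledge is kept.
    """
    params = list(model.named_parameters())
    rng = random.Random(seed)
    rng.shuffle(params)
    half = len(params) // 2
    for i, (_, p) in enumerate(params):
        # First half of the shuffled list: trainable (learns new data).
        # Second half: frozen (retains knowledge from earlier stages).
        p.requires_grad = i < half
```

In a usage scenario, one would call this function before each fine-tuning round (optionally with a different seed per round) and pass only the parameters with `requires_grad=True` to the optimizer; the frozen half incurs no gradient or optimizer-state cost, which is consistent with the reported reduction in training time.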