Instruction Fine-tuning~(IFT) is a critical phase in building large language models~(LLMs). Previous works mainly focus on IFT's role in transferring behavioral norms and in learning additional world knowledge. However, the understanding of the underlying mechanisms of IFT remains significantly limited. In this paper, we design a knowledge intervention framework to decouple the potential underlying factors of IFT, thereby enabling individual analysis of each factor. Surprisingly, our experiments reveal that attempting to learn additional world knowledge through IFT often fails to yield positive effects and can even cause markedly negative ones. Furthermore, we find that maintaining internal knowledge consistency before and after IFT is a critical factor for successful IFT. Our findings reveal the underlying mechanisms of IFT and provide strong support for several very recent studies as well as potential future work.