Fine-tuning, a foundational method for adapting large language models, has long been considered ineffective for model editing. Here, we challenge this belief, arguing that the reported failure stems not from an inherent limitation of fine-tuning itself, but from the way it has been adapted to the sequential nature of the editing task: a single-pass, depth-first pipeline that optimizes each sample to convergence before moving on. While intuitive, this depth-first pipeline, coupled with sample-wise updating, over-optimizes each edit and induces interference across edits. Our controlled experiments reveal that simply restoring fine-tuning to the standard breadth-first (i.e., epoch-based) pipeline with mini-batch optimization substantially improves its effectiveness for model editing. Moreover, fine-tuning in editing suffers from suboptimal tuning-parameter locations inherited from prior methods. Through a systematic analysis of tuning locations, we derive LocFT-BF, a simple and effective localized editing method built on the restored fine-tuning framework. Extensive experiments across diverse LLMs and datasets demonstrate that LocFT-BF outperforms state-of-the-art methods by large margins. Notably, to our knowledge, it is the first method to sustain 100K edits and scale to 72B-parameter models, 10× beyond prior practice, without sacrificing general capabilities. By clarifying a long-standing misconception and introducing a principled localized tuning strategy, we advance fine-tuning from an underestimated baseline to a leading method for model editing, establishing a solid foundation for future research.
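To make the pipeline distinction concrete, the following is a minimal PyTorch sketch contrasting the depth-first (sample-wise, optimize-to-convergence) regime with the restored breadth-first (epoch-based, mini-batch) regime, plus a hypothetical `localize` helper illustrating localized tuning. The model, data, hyperparameters, and the choice of tuned location are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a tiny linear "model" and random (input, target) edit pairs.
# Dimensions, loss, optimizer, and step counts are illustrative only.
model = nn.Linear(16, 16)
edits = [(torch.randn(16), torch.randn(16)) for _ in range(8)]
loss_fn = nn.MSELoss()

def trainable(model):
    # Collect only the parameters left unfrozen (see localize below).
    return [p for p in model.parameters() if p.requires_grad]

def depth_first_edit(model, edits, steps_per_edit=50, lr=1e-2):
    """Single-pass depth-first pipeline: drive each edit to (near)
    convergence before touching the next one. This sample-wise regime is
    what the abstract identifies as over-optimizing each edit and
    inducing interference across edits."""
    opt = torch.optim.Adam(trainable(model), lr=lr)
    for x, y in edits:
        for _ in range(steps_per_edit):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def breadth_first_edit(model, edits, epochs=50, batch_size=4, lr=1e-2):
    """Standard breadth-first (epoch-based) pipeline with mini-batch
    optimization: each epoch takes one small step per mini-batch, so all
    edits progress together instead of being finished one at a time."""
    opt = torch.optim.Adam(trainable(model), lr=lr)
    for _ in range(epochs):
        for i in range(0, len(edits), batch_size):
            xs = torch.stack([x for x, _ in edits[i:i + batch_size]])
            ys = torch.stack([y for _, y in edits[i:i + batch_size]])
            opt.zero_grad()
            loss_fn(model(xs), ys).backward()
            opt.step()

def localize(model, trainable_names):
    """Hypothetical helper for localized tuning: freeze every parameter
    except those named in `trainable_names`. The abstract does not say
    which locations LocFT-BF selects, so the choice below is a placeholder."""
    for name, p in model.named_parameters():
        p.requires_grad_(name in trainable_names)

localize(model, {"weight"})       # e.g., tune only the weight, not the bias
breadth_first_edit(model, edits)  # the restored breadth-first pipeline
```

Under this framing, the method's core change is not a new objective but the schedule: interleaving small steps across all edits (breadth-first) rather than exhausting each edit in turn (depth-first), applied to a deliberately chosen subset of parameters.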