Model merging enables multiple large language models (LLMs) to be combined into a single model while preserving performance. This makes it a valuable tool in LLM development, offering a competitive alternative to multi-task training. However, merging is difficult at scale: a successful merge requires choosing the right merge operator, selecting the right models, and merging them in the right order. This often leads researchers to run expensive merge-and-evaluate searches to select the best merge. In this work, we provide an alternative by introducing \simmerge{}, \emph{a predictive merge-selection method} that selects the best merge using inexpensive, task-agnostic similarity signals between models. From a small set of unlabeled probes, we compute functional and structural features and use them to predict the performance of a given 2-way merge. Using these predictions, \simmerge{} selects the best merge operator, the subset of models to merge, and the merge order, eliminating the expensive merge-and-evaluate loop. We demonstrate that \simmerge{} surpasses standard merge operators on 2-way merges of 7B-parameter LLMs, and that it generalizes to multi-way merges and 111B-parameter LLM merges without retraining. Additionally, we present a bandit variant that supports adding new tasks, models, and operators on the fly. Our results suggest that learning how to merge is a practical route to scalable model composition when checkpoint catalogs are large and evaluation budgets are tight.
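The predictive merge-selection pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature choices (cosine similarity of weight vectors as the structural signal, probe-prediction agreement as the functional signal), the least-squares predictor, and all function and operator names are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of predictive merge selection (all names are
# assumptions, not the paper's actual implementation).

def structural_feature(w_a, w_b):
    """Cosine similarity between two models' flattened weight vectors."""
    return float(w_a @ w_b / (np.linalg.norm(w_a) * np.linalg.norm(w_b)))

def functional_feature(out_a, out_b):
    """Fraction of unlabeled probes on which the two models agree."""
    return float(np.mean(out_a == out_b))

def featurize(w_a, w_b, out_a, out_b):
    """Task-agnostic feature vector for one candidate 2-way merge."""
    return np.array([
        structural_feature(w_a, w_b),
        functional_feature(out_a, out_b),
        1.0,  # bias term
    ])

def fit_predictor(X, y):
    """Least-squares fit: features of past merges -> measured merge quality."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def select_best_merge(candidates, coef):
    """Score each (operator, features) candidate; return the top operator.

    Replaces the merge-and-evaluate loop with cheap predicted scores.
    """
    scores = [coef @ feats for _, feats in candidates]
    best = int(np.argmax(scores))
    return candidates[best][0], scores[best]
```

In practice, the predictor would be trained once on a set of evaluated merges, then reused to rank every candidate (operator, model-pair) combination without merging or evaluating any of them.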