Model merging aims to efficiently combine the weights of multiple expert models, each trained on a specific task, into a single multi-task model that performs strongly across all tasks. When applied to all but the last layer of weights, existing methods such as Task Arithmetic, TIES-merging, and TALL-mask merging work well for combining expert models obtained by fine-tuning a common foundation model, operating within a "local" neighborhood of that foundation model. This work explores the more challenging scenario of "non-local" merging, which we find arises when an expert model changes significantly during pretraining or when the expert models do not even share a common foundation model. We observe that standard merging techniques often fail to generalize effectively in this non-local setting, even when permutation symmetries are accounted for using standard techniques. We identify that this failure is due, in part, to "variance collapse", a phenomenon also identified in the setting of linear mode connectivity by Jordan et al. (2023). To address this, we propose a multi-task technique to re-scale and shift the output activations of the merged model for each task, aligning its output statistics with those of the corresponding task-specific expert model. Our experiments demonstrate that this correction significantly improves the performance of various model merging approaches in non-local settings, providing a strong baseline for future research on this problem.
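The per-task correction described above can be illustrated with a minimal sketch: fit a per-feature affine map (scale and shift) so that the merged model's output activations on a task match the first- and second-order statistics of that task's expert model. The function names and the use of NumPy here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fit_output_correction(merged_acts, expert_acts):
    """Fit a per-feature scale and shift so that corrected merged-model
    activations match the expert model's mean and standard deviation.

    merged_acts, expert_acts: arrays of shape (num_samples, num_features),
    output activations collected on the same task's data.
    (Hypothetical helper; names are assumptions for illustration.)
    """
    mu_m, sd_m = merged_acts.mean(axis=0), merged_acts.std(axis=0) + 1e-8
    mu_e, sd_e = expert_acts.mean(axis=0), expert_acts.std(axis=0)
    scale = sd_e / sd_m              # re-scale to restore collapsed variance
    shift = mu_e - scale * mu_m      # shift to align the mean
    return scale, shift

def apply_correction(acts, scale, shift):
    """Apply the fitted per-feature affine correction to new activations."""
    return acts * scale + shift
```

One pair of (scale, shift) vectors is fitted per task, so the merged model routes each task's outputs through its own correction at inference time; this directly counteracts the "variance collapse" in which merged-model activations have systematically smaller spread than the experts'.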