Combining the predictions of multiple trained models through ensembling is generally a good way to improve accuracy by leveraging the models' different learned features; however, it comes with high computational and storage costs. Model fusion, the act of merging multiple models into one by combining their parameters, reduces these costs but does not work as well in practice. Indeed, neural network loss landscapes are high-dimensional and non-convex, and the minima found through learning are typically separated by high loss barriers. Numerous recent works have focused on finding permutations that match the features of one network to those of a second, lowering the loss barrier on the linear path between them in parameter space. However, permutations are restrictive since they assume that a one-to-one mapping between the different models' neurons exists. We propose a new model merging algorithm, CCA Merge, which is based on Canonical Correlation Analysis and aims to maximize the correlations between linear combinations of the models' features. We show that our alignment method leads to better performance than past methods when averaging models trained on the same or differing data splits. We also extend this analysis to the harder setting where more than two models are merged, and we find that CCA Merge works significantly better than past methods. Our code is publicly available at https://github.com/shoroi/align-n-merge