Recent work on permutation-based model merging has shown impressive low- or zero-barrier mode connectivity between models trained from completely different initializations. However, this line of work has not yet extended to the Transformer architecture, despite its dominance in the language domain. In this work, we therefore investigate the extent to which separate Transformer minima learn similar features, and propose a model merging technique to study the relationship between these minima in the loss landscape. Features of the architecture, such as its residual connections, multi-head attention, and discrete, sequential input, call for specific interventions in order to compute model permutations that remain within the same functional equivalence class. Merging these models with our method, we consistently find lower loss barriers between minima than model averaging, across models trained on a masked language modeling task or fine-tuned on a language understanding benchmark. Our results show that the minima of these models are less sharp and isolated than previously understood, and provide a basis for future work on merging separately trained Transformer models.
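To make the two notions the abstract relies on concrete, the following is a minimal sketch, not the authors' method: a toy one-hidden-layer MLP (standing in for a single Transformer sublayer) whose hidden units are permuted without changing its function, i.e., a move within the same functional equivalence class, together with the standard loss-barrier measurement along the linear interpolation between two weight settings. All function names and the toy model are hypothetical illustrations.

```python
# Hypothetical sketch of (i) permutation symmetry and (ii) the loss barrier
# along a linear weight interpolation; not the paper's merging algorithm.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # One hidden layer with ReLU, standing in for a single sublayer.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def permute_hidden(W1, b1, W2, perm):
    # Rearrange hidden units consistently (columns of W1, entries of b1,
    # rows of W2), so the permuted model computes the same function.
    return W1[:, perm], b1[perm], W2[perm, :]

def loss_barrier(loss_fn, theta_a, theta_b, n_points=11):
    # Barrier = max over alpha of L((1-a)*A + a*B) minus the linear
    # interpolation of the endpoint losses at the same alpha.
    losses = []
    for alpha in np.linspace(0.0, 1.0, n_points):
        theta = [(1 - alpha) * a + alpha * b for a, b in zip(theta_a, theta_b)]
        losses.append(loss_fn(theta))
    endpoint_line = np.linspace(losses[0], losses[-1], n_points)
    return float(np.max(np.array(losses) - endpoint_line))

# Check that a hidden-unit permutation preserves the function exactly.
d, h = 4, 8
W1, b1 = rng.normal(size=(d, h)), rng.normal(size=h)
W2, b2 = rng.normal(size=(h, d)), rng.normal(size=d)
x = rng.normal(size=(5, d))
perm = rng.permutation(h)
W1p, b1p, W2p = permute_hidden(W1, b1, W2, perm)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```

In this framing, permutation-based merging searches for a permutation of one model's units that, applied before interpolation, reduces the barrier reported by a function like `loss_barrier` relative to naive averaging of unaligned weights.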