Fine-tuning pre-trained models provides significant advantages in downstream performance. The ubiquity of pre-trained models such as BERT and its derivatives in natural language processing has led to a proliferation of task-specific fine-tuned models. Because each of these models typically performs only one task well, multi-task scenarios require additional training or ensembling. The growing field of model merging offers a solution by addressing the challenge of combining multiple task-specific models into a single multi-task model. In this study, we introduce a novel model merging method for Transformers that combines insights from prior work on Fisher-weighted averaging with the use of Fisher information in model pruning. Using the Fisher information of mask nodes within the Transformer architecture, we devise a computationally efficient weighted-averaging scheme. Our method yields consistent and significant performance gains across models in the BERT family, outperforming full-scale Fisher-weighted averaging at a fraction of the computational cost, with improvements over the baseline of up to +6.5 points and a speedup of 57.4x on the largest model. These results demonstrate the potential of our method in current multi-task learning environments and suggest its scalability and adaptability to new model architectures and learning scenarios.
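As a rough illustration of the weighted-averaging idea underlying Fisher-based merging, the sketch below combines task-specific checkpoints by weighting each parameter with a diagonal Fisher estimate. The function name, the per-parameter Fisher dictionaries, and the eps term are illustrative assumptions for a generic Fisher-weighted average, not the paper's mask-node-based implementation.

```python
# Minimal sketch of Fisher-weighted parameter averaging, assuming each
# task-specific model comes with a precomputed diagonal Fisher estimate
# (dicts mapping parameter names to tensors). Illustrative only; the
# paper's method instead uses the Fisher information of mask nodes.
import torch

def fisher_weighted_average(params_list, fishers_list, eps=1e-8):
    """Merge models by weighting each parameter with its Fisher information.

    params_list  -- list of state_dicts {name: tensor}, one per task model
    fishers_list -- list of matching dicts with diagonal Fisher estimates
    """
    merged = {}
    for name in params_list[0]:
        weighted_sum = torch.zeros_like(params_list[0][name])
        fisher_sum = torch.zeros_like(params_list[0][name])
        for params, fishers in zip(params_list, fishers_list):
            f = fishers[name]
            weighted_sum += f * params[name]
            fisher_sum += f
        # Normalize; eps guards against parameters with near-zero Fisher mass.
        merged[name] = weighted_sum / (fisher_sum + eps)
    return merged
```

Computing the full diagonal Fisher over all parameters is what makes full-scale Fisher-weighted averaging expensive; restricting the Fisher computation to the far smaller set of mask nodes is what yields the reported speedup.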