Model merging aims to combine multiple task-specific expert models into a single model while preserving generalization across diverse tasks. However, interference among experts, especially when they are trained with different objectives, often leads to significant performance degradation. Despite recent progress, resolving this interference without data access, retraining, or architectural modification remains a fundamental challenge. This paper provides a theoretical analysis demonstrating that the input covariance of each task, a key quantity for optimal merging, can be implicitly estimated from the parameter differences of its fine-tuned model, even in a fully data-free setting. Building on this insight, we introduce \acem, an Adaptive Covariance Estimation framework that effectively mitigates inter-task interference. Our approach features a principled, closed-form solution, in contrast to prior iterative or heuristic methods. Extensive experiments on both vision and language benchmarks demonstrate that \acem sets a new state of the art among data-free methods. It consistently outperforms existing baselines; for example, it achieves an average absolute improvement of 4\% over prior methods across seven tasks on GPT-2. Owing to its efficient closed-form formulation, \acem delivers superior performance at modest computational cost, providing a practical and theoretically grounded solution for model merging.
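For intuition, the following is a minimal sketch of the covariance-weighted least-squares view of merging that the abstract alludes to; the notation ($W_t$, $X_t$, $C_t$, $W^\ast$) is illustrative and not necessarily \acem's exact formulation. For a single linear layer with task inputs $X_t$ and fine-tuned weights $W_t$, choosing the merged weights to match each expert's outputs on its own inputs admits a closed-form solution:
\[
W^\ast \;=\; \arg\min_{W} \sum_{t=1}^{T} \big\lVert X_t W - X_t W_t \big\rVert_F^2
\;=\; \Big( \sum_{t=1}^{T} C_t \Big)^{-1} \sum_{t=1}^{T} C_t\, W_t,
\qquad C_t = X_t^\top X_t,
\]
where $C_t$ is the (uncentered) input covariance of task $t$ and $\sum_t C_t$ is assumed invertible. In a fully data-free setting, $X_t$ is unobservable, so such a closed form is inapplicable as stated; the abstract's claim that $C_t$ can be implicitly estimated from the parameter difference between the fine-tuned and pre-trained weights is what would make this kind of covariance-aware merge feasible without data.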