Deep model fusion is an emerging technique that unifies the predictions or parameters of several deep neural networks into a single model in a cost-effective and data-efficient manner. This enables the unified model to take advantage of the original models' strengths, potentially exceeding their performance. Although a variety of deep model fusion techniques have been introduced, their evaluations tend to be inconsistent and are often inadequate to validate their effectiveness and robustness against distribution shifts. To address this issue, we introduce FusionBench, the first comprehensive benchmark dedicated to deep model fusion. FusionBench covers a wide range of tasks, including open-vocabulary image classification, text classification, and text-to-text generation. Each category includes up to eight tasks with corresponding task-specific models, featuring both full fine-tuning and LoRA fine-tuning, as well as models of different sizes, to ensure fair and balanced comparisons of multi-task model fusion techniques across tasks, model scales, and fine-tuning strategies. We implement and evaluate a broad spectrum of deep model fusion techniques, ranging from model ensemble methods, which combine predictions to improve overall performance, to model merging, which integrates different models into a single one, and model mixing methods, which upscale or recombine the components of the original models. FusionBench currently contains 26 distinct tasks, 74 fine-tuned models, and 16 fusion techniques, and we are committed to consistently expanding the benchmark with more tasks, models, and fusion techniques. In addition, we offer a well-documented set of resources and guidelines to aid researchers in understanding and replicating the benchmark results. Homepage: https://github.com/tanganke/fusion_bench
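To make the three fusion families concrete, the sketch below illustrates the simplest form of model merging: uniform parameter averaging across task-specific models. This is a minimal, framework-free illustration under assumed plain-Python data structures (dicts of float lists standing in for state dicts), not FusionBench's actual implementation, which operates on real framework tensors and includes many more sophisticated merging methods.

```python
# Minimal sketch of model merging via uniform parameter averaging.
# Hypothetical representation: each "model" is a dict mapping parameter
# names to flat lists of floats (a stand-in for a real state dict).

def merge_average(state_dicts):
    """Merge several task-specific models by averaging each parameter
    element-wise across all models."""
    merged = {}
    for name in state_dicts[0]:
        # Collect this parameter from every model and average element-wise.
        vals = [sd[name] for sd in state_dicts]
        merged[name] = [sum(col) / len(col) for col in zip(*vals)]
    return merged

# Two toy "fine-tuned models" with identical parameter shapes.
model_a = {"w": [1.0, 2.0], "b": [0.0]}
model_b = {"w": [3.0, 4.0], "b": [2.0]}
merged = merge_average([model_a, model_b])
print(merged)  # {'w': [2.0, 3.0], 'b': [1.0]}
```

In contrast to merging, a model ensemble would keep both models and average their *predictions* at inference time, while model mixing would recombine or upscale components (e.g., layers or experts) rather than averaging weights directly.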