The rapid expansion of the open-source language model landscape presents an opportunity to merge the competencies of these model checkpoints by combining their parameters. Advances in transfer learning, the process of fine-tuning pre-trained models for specific tasks, have resulted in the development of vast numbers of task-specific models, typically specialized in individual tasks and unable to utilize each other's strengths. Model merging facilitates the creation of multitask models without the need for additional training, offering a promising avenue for enhancing model performance and versatility. By preserving the intrinsic capabilities of the original models, model merging addresses complex challenges in AI, including the difficulties of catastrophic forgetting and multi-task learning. To support this expanding area of research, we introduce MergeKit, a comprehensive, open-source library designed to facilitate the application of model merging strategies. MergeKit offers an extensible framework to efficiently merge models on any hardware, providing utility to researchers and practitioners. To date, thousands of models have been merged by the open-source community, leading to the creation of some of the world's most powerful open-source model checkpoints, as assessed by the Open LLM Leaderboard. The library is accessible at https://github.com/arcee-ai/MergeKit.
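As a concrete illustration of merging checkpoints by combining their parameters, the simplest strategy is element-wise weight averaging. The sketch below uses toy dictionaries of floats in place of real checkpoint tensors; the function and tensor names are hypothetical and do not reflect MergeKit's actual API:

```python
# Minimal sketch of linear parameter averaging, the simplest merging
# strategy. Tensor names and the helper name are hypothetical;
# MergeKit implements this and more sophisticated methods.

def linear_merge(state_dicts, weights=None):
    """Average corresponding parameters across model checkpoints.

    state_dicts: list of dicts mapping parameter names to lists of floats.
    weights: optional per-model mixing coefficients (should sum to 1).
    """
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n  # uniform average by default
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "checkpoints" sharing one parameter tensor.
model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 4.0]}
print(linear_merge([model_a, model_b]))  # {'layer.weight': [2.0, 3.0]}
```

Because no gradient updates are involved, a merge like this runs in a single pass over the parameters, which is why merging is feasible even on modest hardware.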