Large Language Models (LLMs) are key technologies enabling intelligent systems to handle a wide range of tasks. To meet the demands of diverse tasks, an increasing number of LLM-driven experts with varied capabilities have been developed, accompanied by corresponding benchmarks to evaluate their performance. This paper proposes the Bench-CoE framework, which enables Collaboration of Experts (CoE) by effectively leveraging benchmark evaluations to achieve optimal performance across various tasks. Bench-CoE comprises a set of expert models, a router that assigns each task to the appropriate expert, and a benchmark dataset for training the router. Moreover, we formulate Query-Level and Subject-Level approaches based on our framework, and analyze the merits and drawbacks of each. Finally, we conduct a series of experiments with varying data distributions on both language and multimodal tasks, validating that Bench-CoE outperforms any single constituent model in overall performance. We hope this method serves as a baseline for further research in this area. The code is available at \url{https://github.com/ZhangXJ199/Bench-CoE}.
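To make the routing idea concrete, here is a minimal sketch of a query-level router in the spirit of the framework described above. The toy benchmark queries, the expert labels, and the TF-IDF plus logistic-regression classifier are all illustrative assumptions, not the released Bench-CoE implementation (see the repository linked above for the authors' code).

```python
# Minimal sketch of a query-level router: a lightweight classifier is trained
# on benchmark-derived (query, best-expert) pairs, then used to dispatch new
# queries to the expert predicted to perform best. All data and expert stubs
# below are hypothetical placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical benchmark-derived training data: each query is labeled with
# the expert that scored best on it during benchmark evaluation.
benchmark_queries = [
    "Solve for x: 3x + 7 = 22",
    "Prove that the sum of two even integers is even",
    "Translate 'good morning' into French",
    "Summarize the plot of Hamlet in two sentences",
    "Write a Python function that reverses a linked list",
    "Fix the off-by-one bug in this loop",
]
best_expert = ["math", "math", "language", "language", "code", "code"]

# The router maps a query to an expert label.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(benchmark_queries, best_expert)

# Hypothetical expert pool; in practice each entry would wrap an LLM call.
experts = {
    "math": lambda q: f"[math expert] answering: {q}",
    "language": lambda q: f"[language expert] answering: {q}",
    "code": lambda q: f"[code expert] answering: {q}",
}

def route_and_answer(query: str) -> str:
    """Assign the query to the predicted best expert and return its answer."""
    label = router.predict([query])[0]
    return experts[label](query)

print(route_and_answer("Integrate x^2 from 0 to 1"))
```

A subject-level variant would instead predict a coarse subject label for each query and route every query in that subject to the expert with the best aggregate benchmark score on it, trading per-query precision for robustness to distribution shift.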