Rapid advancements in Large Language Models (LLMs) have significantly expanded their applications, ranging from multilingual support to domain-specific tasks and multimodal integration. In this paper, we present OmniEvalKit, a novel benchmarking toolbox designed to evaluate LLMs and their omni-extensions across multilingual, multidomain, and multimodal capabilities. Unlike existing benchmarks that often focus on a single aspect, OmniEvalKit provides a modular, lightweight, and automated evaluation system. Its architecture comprises a Static Builder and a Dynamic Data Flow, enabling seamless integration of new models and datasets. OmniEvalKit supports over 100 LLMs and 50 evaluation datasets, enabling comprehensive evaluation across thousands of model-dataset combinations. By providing an ultra-lightweight, rapidly deployable evaluation framework, OmniEvalKit aims to make downstream applications more convenient and versatile for the AI community.
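To make the modular design concrete, below is a minimal sketch of how a registry-based "Static Builder plus Dynamic Data Flow" separation could be structured: registries are populated once at import time, and the runtime loop pairs any registered model with any registered dataset. All names here (`MODEL_REGISTRY`, `register_model`, `evaluate`, `ToyLLM`, `toy_qa`) are illustrative assumptions for exposition, not the actual OmniEvalKit API.

```python
# Illustrative sketch only; names are assumptions, not the OmniEvalKit API.
from typing import Callable, Dict, List

# "Static Builder" side: registries filled at import time, so new models
# and datasets plug in without modifying the evaluation loop itself.
MODEL_REGISTRY: Dict[str, Callable[[], "Model"]] = {}
DATASET_REGISTRY: Dict[str, Callable[[], List[dict]]] = {}

def register_model(name: str):
    def wrapper(factory):
        MODEL_REGISTRY[name] = factory
        return factory
    return wrapper

def register_dataset(name: str):
    def wrapper(factory):
        DATASET_REGISTRY[name] = factory
        return factory
    return wrapper

class Model:
    def generate(self, prompt: str) -> str:
        raise NotImplementedError

@register_model("toy-llm")
class ToyLLM(Model):
    # Hypothetical stand-in for a real LLM backend.
    def generate(self, prompt: str) -> str:
        return "4"

@register_dataset("toy-qa")
def toy_qa() -> List[dict]:
    return [{"question": "2 + 2 = ?", "answer": "4"}]

# "Dynamic Data Flow" side: at run time, any registered model can be
# paired with any registered dataset, yielding model-dataset
# combinations without per-pair glue code.
def evaluate(model_name: str, dataset_name: str) -> float:
    model = MODEL_REGISTRY[model_name]()
    data = DATASET_REGISTRY[dataset_name]()
    correct = sum(model.generate(s["question"]) == s["answer"] for s in data)
    return correct / len(data)

if __name__ == "__main__":
    print(evaluate("toy-llm", "toy-qa"))  # -> 1.0
```

Under this pattern, supporting 100+ models and 50+ datasets reduces to writing independent registration modules; the cross-product of combinations falls out of the decoupled registries rather than hand-written pairings.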