Advances in large foundation models necessitate wide-coverage, low-cost, and zero-contamination benchmarks. Despite continuous exploration of language model evaluations, comprehensive studies on the evaluation of Large Multi-modal Models (LMMs) remain limited. In this work, we introduce LMMS-EVAL, a unified and standardized multimodal benchmark framework covering over 50 tasks and more than 10 models to promote transparent and reproducible evaluations. Although LMMS-EVAL offers comprehensive coverage, we find it still falls short of achieving low cost and zero contamination. To approach this evaluation trilemma, we further introduce LMMS-EVAL LITE, a pruned evaluation toolkit that emphasizes both coverage and efficiency. Additionally, we present Multimodal LIVEBENCH, which draws on continuously updated news and online forums to assess models' generalization abilities in the wild, offering a low-cost and zero-contamination evaluation approach. In summary, our work highlights the importance of considering the evaluation trilemma and provides practical solutions to navigate the trade-offs in evaluating large multi-modal models, paving the way for more effective and reliable benchmarking of LMMs. We open-source our codebase and maintain the LIVEBENCH leaderboard at https://github.com/EvolvingLMMs-Lab/lmms-eval and https://huggingface.co/spaces/lmms-lab/LiveBench.