Multimodal large language models (MLLMs) are currently at the center of research attention, showing rapid progress in scale and capabilities, yet their intelligence, limitations, and risks remain insufficiently understood. To address these issues, particularly for the Russian language, for which no multimodal benchmarks currently exist, we introduce MERA Multi, an open multimodal evaluation framework for Russian-speaking models. The benchmark is instruction-based, covers the text, image, audio, and video modalities, and comprises 18 newly constructed evaluation tasks for both general-purpose models and modality-specific architectures (image-to-text, video-to-text, and audio-to-text). Our contributions include: (i) a universal taxonomy of multimodal abilities; (ii) 18 datasets created entirely from scratch with attention to Russian cultural and linguistic specificity, with unified prompts and metrics; (iii) baseline results for both closed-source and open-source models; (iv) a methodology for preventing benchmark leakage, including watermarking of private sets. While our current focus is on Russian, the proposed benchmark provides a replicable methodology for constructing multimodal benchmarks in typologically diverse languages, particularly within the Slavic language family.