Manga, or Japanese comics, is a richly multimodal narrative form that blends images and text in complex ways. Teaching large multimodal models (LMMs) to understand such narratives at a human-like level could help manga creators reflect on and refine their stories. To this end, we introduce two benchmarks for multimodal manga understanding: MangaOCR, which targets in-page text recognition, and MangaVQA, a novel benchmark designed to evaluate contextual understanding through visual question answering. MangaVQA consists of 526 high-quality, manually constructed question-answer pairs, enabling reliable evaluation across diverse narrative and visual scenarios. Building on these benchmarks, we develop MangaLMM, a manga-specialized model finetuned from the open-source LMM Qwen2.5-VL to jointly handle both tasks. Through extensive experiments, including comparisons with proprietary models such as GPT-4o and Gemini 2.5, we assess how well LMMs understand manga. Our benchmark and model provide a comprehensive foundation for evaluating and advancing LMMs in the richly narrative domain of manga.
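To make the VQA setup concrete, the snippet below is a minimal inference sketch using the standard Hugging Face interface for Qwen2.5-VL, the open-source base model from which MangaLMM is finetuned. The checkpoint name, image path, and question are illustrative placeholders only and are not artifacts described in this abstract.

```python
# Minimal manga-VQA inference sketch with Qwen2.5-VL via Hugging Face transformers.
# MODEL_ID, the image path, and the question are illustrative placeholders;
# a manga-specialized checkpoint (e.g., MangaLMM) could be substituted if released.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

MODEL_ID = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumption: base model identifier

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# One manga page plus a free-form question, in the chat format Qwen2.5-VL expects.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/manga_page.png"},
        {"type": "text", "text": "Why does the protagonist look surprised on this page?"},
    ],
}]

# Build the text prompt and preprocess the image(s).
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate an answer and strip the prompt tokens before decoding.
generated = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```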