Multimodal Large Language Models (MLLMs) have made significant progress in the field of document analysis. However, existing benchmarks typically focus only on extracting text and simple layout information, neglecting the complex interactions between elements in structured documents such as mind maps and flowcharts. To address this gap, we introduce MindBench, a new benchmark that not only includes meticulously constructed bilingual real and synthetic images, detailed annotations, evaluation metrics, and baseline models, but also specifically designs five types of structured understanding and parsing tasks: full parsing, partial parsing, position-related parsing, structured Visual Question Answering (VQA), and position-related VQA. Together, these tasks cover key capabilities such as text recognition, spatial awareness, relationship discernment, and structured parsing. Extensive experimental results show that current models have substantial potential, but also considerable room for improvement, in handling structured document information. We anticipate that the release of MindBench will significantly advance research and application development in structured document analysis. MindBench is available at: https://miasanlei.github.io/MindBench.github.io/.
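To make the parsing tasks concrete, the following is a minimal sketch of one plausible way a mind map could be represented and serialized as a sequence target for full parsing, with a bbox-grounded question illustrating position-related VQA. The `Node` class, coordinate convention, and indentation format are illustrative assumptions, not the benchmark's actual annotation schema.

```python
# Illustrative sketch only: this node/target format is an assumption,
# NOT the actual MindBench annotation schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    text: str                                   # node label recognized from the image
    bbox: Tuple[int, int, int, int]             # (x1, y1, x2, y2) pixel coordinates
    children: List["Node"] = field(default_factory=list)

def serialize(node: Node, depth: int = 0) -> str:
    """Flatten the tree into an indented text target, so that 'full
    parsing' can be scored as a sequence prediction task."""
    lines = ["  " * depth + node.text]
    for child in node.children:
        lines.append(serialize(child, depth + 1))
    return "\n".join(lines)

# Example mind map: a root topic with two branches.
root = Node("Document Analysis", (100, 40, 300, 80), [
    Node("Text Recognition", (40, 120, 180, 150)),
    Node("Structured Parsing", (220, 120, 380, 150), [
        Node("Mind Maps", (240, 180, 340, 210)),
    ]),
])

print(serialize(root))
# A position-related VQA pair could then probe bbox-grounded facts, e.g.:
question = "Which node lies inside the region (200, 100, 400, 220)?"
answer = "Structured Parsing (and its child, Mind Maps)"
```

Partial parsing would, under the same assumptions, ask the model to reproduce only a designated subtree rather than the full serialization.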