Multimodal Foundation Models (MMFMs) have shown remarkable performance on various computer vision and natural language processing tasks. However, their performance on particular tasks such as document understanding is still limited. They also require more compute, time, and engineering resources to finetune and deploy compared to traditional, unimodal models. In this report, we present Multimodal Structured Generation, a general framework that constrains the output logits of frozen MMFMs to force them to reason before responding with structured outputs that downstream APIs can parse and use. We provide a detailed account of our approach, including the technical details, theoretical discussions, and final evaluation results from the 2nd Multimodal Foundation Models Challenge hosted by the Computer Vision and Pattern Recognition (CVPR) conference. Our approach achieved the second-highest score on the hidden test set for Phase 2 and the third-highest overall. This shows the method's ability to generalize to unseen tasks, and that simple engineering can beat expensive and complicated modelling steps, as we first discussed in our paper, Retrieval Augmented Structured Generation: Business Document Information Extraction as Tool Use. All of our scripts, deployment steps, and evaluation results can be accessed at https://github.com/leloykun/MMFM-Challenge
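The core idea of constraining a frozen model's output logits can be sketched as follows. This is a minimal, self-contained illustration with a toy vocabulary and a stand-in for the model's logits, not the actual MMFM or grammar used in the report: at each decoding step, logits for tokens that would violate the target structure are masked to negative infinity, so only schema-consistent tokens can be emitted.

```python
import math

# Toy vocabulary; a real tokenizer's vocabulary would stand here.
VOCAB = ['{', '}', '"total"', ':', '"42"', 'hello']

def fake_logits(step):
    # Hypothetical stand-in for a frozen model's next-token logits.
    # Note the unconstrained argmax would often pick 'hello'.
    return [1.0, 0.5, 2.0, 1.5, 0.8, 3.0]

# A trivial "grammar": the only valid output is {"total":"42"}.
# A real system would track a JSON schema or regex automaton state.
EXPECTED = ['{', '"total"', ':', '"42"', '}']

def allowed_tokens(step):
    # Tokens permitted by the grammar at this decoding step.
    return {EXPECTED[step]}

def constrained_decode():
    out = []
    for step in range(len(EXPECTED)):
        logits = fake_logits(step)
        # Mask logits of disallowed tokens to -inf before picking.
        masked = [
            logit if VOCAB[i] in allowed_tokens(step) else -math.inf
            for i, logit in enumerate(logits)
        ]
        best = max(range(len(VOCAB)), key=lambda i: masked[i])
        out.append(VOCAB[best])
    return ''.join(out)

print(constrained_decode())  # {"total":"42"}
```

Because the model itself stays frozen and only the sampling step is constrained, this kind of structured generation adds no finetuning cost; libraries such as Outlines implement the same principle against real tokenizers and JSON schemas.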