We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0, maintaining its core model architecture while introducing significant enhancements in training and testing strategies as well as data quality. In this work, we delve into the relationship between model scaling and performance, systematically exploring the performance trends in vision encoders, language models, dataset sizes, and test-time configurations. Through extensive evaluations on a wide range of benchmarks, including multi-discipline reasoning, document understanding, multi-image/video understanding, real-world comprehension, multimodal hallucination detection, visual grounding, multilingual capabilities, and pure language processing, InternVL 2.5 exhibits competitive performance, rivaling leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, our model is the first open-source MLLM to surpass 70% on the MMMU benchmark, achieving a 3.7-point improvement through Chain-of-Thought (CoT) reasoning and showcasing strong potential for test-time scaling. We hope this model contributes to the open-source community by setting new standards for developing and applying multimodal AI systems. A HuggingFace demo is available at https://huggingface.co/spaces/OpenGVLab/InternVL