With the rapid advance of digitalization, document images are being used ever more widely in production and daily life, and the need for fast, accurate parsing of their content is increasingly urgent. This report therefore presents PP-DocBee, a novel multimodal large language model designed for end-to-end document image understanding. First, we develop a data synthesis strategy tailored to document scenarios, building a diverse dataset that improves model generalization. We then apply several training techniques, including dynamic proportional sampling, data preprocessing, and OCR post-processing strategies. Extensive evaluations demonstrate the superior performance of PP-DocBee, which achieves state-of-the-art results on English document understanding benchmarks and even outperforms existing open-source and commercial models on Chinese document understanding. The source code and pre-trained models are publicly available at \href{https://github.com/PaddlePaddle/PaddleMIX}{https://github.com/PaddlePaddle/PaddleMIX}.