Harnessing visual texts represents a burgeoning frontier in the evolution of language modeling. In this paper, we introduce a novel pre-training framework for a suite of pixel-based autoregressive language models, pre-trained on a corpus of over 400 million documents rendered as RGB images. Our approach is characterized by a dual-modality training regimen, engaging visual data through next patch prediction with a regression head and textual data via next token prediction with a classification head. This study focuses on investigating the synergistic interplay between the visual and textual modalities of language. Our comprehensive evaluation across a diverse array of benchmarks reveals that combining visual and textual data substantially improves the performance of pixel-based language models. Notably, our findings show that a unidirectional pixel-based model, trained without any textual data, can match the performance of advanced bidirectional pixel-based models on various language understanding benchmarks. This work highlights the considerable untapped potential of integrating visual and textual information for language modeling. We will release our code, data, and checkpoints to inspire further research.
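To make the dual-modality objective concrete, the sketch below shows one plausible way such a model could be trained: a shared causal Transformer trunk with a regression head for next-patch prediction over rendered-text image patches and a classification head for next-token prediction over ordinary text. This is a minimal illustration under assumed choices (PyTorch, placeholder dimensions, equal loss weighting, and invented names such as `DualModalityLM`), not the paper's actual architecture or configuration.

```python
# Minimal sketch (assumptions: PyTorch, placeholder sizes, hypothetical names)
# of dual-modality autoregressive training: next-patch regression on pixels
# and next-token classification on text, sharing one causal Transformer trunk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModalityLM(nn.Module):
    def __init__(self, d_model=512, n_layers=6, n_heads=8,
                 patch_dim=16 * 16 * 3, vocab_size=32000, max_len=512):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, d_model)       # RGB patch -> hidden
        self.token_embed = nn.Embedding(vocab_size, d_model)   # token id -> hidden
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True, norm_first=True)
        self.trunk = nn.TransformerEncoder(layer, n_layers)
        self.regression_head = nn.Linear(d_model, patch_dim)        # next-patch prediction
        self.classification_head = nn.Linear(d_model, vocab_size)   # next-token prediction

    def _run_trunk(self, h):
        # Causal (unidirectional) mask: position t attends only to positions <= t.
        T = h.size(1)
        pos = torch.arange(T, device=h.device)
        h = h + self.pos_embed(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(h.device)
        return self.trunk(h, mask=mask)

    def forward_pixels(self, patches):
        # patches: (B, T, patch_dim) flattened RGB patches of text rendered as images.
        h = self._run_trunk(self.patch_embed(patches))
        pred = self.regression_head(h[:, :-1])           # predict patch t+1 from prefix
        return F.mse_loss(pred, patches[:, 1:])          # regression loss on pixel values

    def forward_tokens(self, token_ids):
        # token_ids: (B, T) integer ids of ordinary text.
        h = self._run_trunk(self.token_embed(token_ids))
        logits = self.classification_head(h[:, :-1])     # predict token t+1 from prefix
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               token_ids[:, 1:].reshape(-1))

# One illustrative training step combining both objectives (equal weighting is an assumption).
model = DualModalityLM()
patches = torch.randn(2, 32, 16 * 16 * 3)      # dummy rendered-text patches
tokens = torch.randint(0, 32000, (2, 32))      # dummy token ids
loss = model.forward_pixels(patches) + model.forward_tokens(tokens)
loss.backward()
```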