We introduce Visual Caption Restoration (VCR), a novel vision-language task that challenges models to accurately restore partially obscured text using pixel-level hints within images. This task stems from the observation that text embedded in images is intrinsically different from common visual elements and natural language, owing to the need to align three modalities: vision, text, and text embedded in images. While numerous works have integrated text embedded in images into visual question-answering tasks, approaches to these tasks generally rely on optical character recognition or masked language modeling, thus reducing the task to mainly text-based processing. However, text-based processing becomes ineffective in VCR, as accurate text restoration depends on the combined information from the provided image, the surrounding context, and the subtle cues in the tiny exposed areas of the masked text. We develop a pipeline to generate synthetic images for the VCR task from image-caption pairs, with adjustable caption visibility to control the task difficulty. With this pipeline, we construct a dataset for VCR called VCR-Wiki using images with captions from Wikipedia, comprising 2.11M English and 346K Chinese entities in both easy and hard split variants. Our results reveal that current vision-language models significantly lag behind human performance on the VCR task, and merely fine-tuning the models on our dataset does not lead to notable improvements. We release VCR-Wiki and the data construction code to facilitate future research.
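The masking step of the pipeline can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `mask_caption`, the `visible_ratio` parameter, and the row-wise white-out strategy are assumptions introduced here to show how an adjustable visibility knob could control task difficulty; the toy array stands in for a rendered caption image.

```python
import numpy as np

def mask_caption(caption_img: np.ndarray, visible_ratio: float) -> np.ndarray:
    """Obscure the lower (1 - visible_ratio) fraction of a rendered caption,
    leaving only a thin strip of pixels as the restoration hint.
    (Hypothetical sketch; the real pipeline's masking may differ.)"""
    h = caption_img.shape[0]
    keep = max(1, int(h * visible_ratio))  # rows left visible from the top
    masked = caption_img.copy()
    masked[keep:, :] = 255                 # white-out the obscured region
    return masked

# Toy "caption": a 10x20 block of dark text pixels (0) on a white page
caption = np.zeros((10, 20), dtype=np.uint8)
easy = mask_caption(caption, visible_ratio=0.5)  # easier: more pixels exposed
hard = mask_caption(caption, visible_ratio=0.2)  # harder: thinner hint strip
```

Lowering `visible_ratio` shrinks the exposed strip, which is one plausible way the easy and hard splits of such a dataset could be generated from the same image-caption pairs.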