We introduce Visual Caption Restoration (VCR), a novel vision-language task that challenges models to accurately restore partially obscured text using pixel-level hints within images. This task stems from the observation that text embedded in images is intrinsically different from common visual elements and natural language, owing to the need to align three modalities: vision, text, and text embedded in images. While numerous works have integrated text embedded in images into visual question-answering tasks, approaches to these tasks generally rely on optical character recognition or masked language modeling, thus reducing the task to mainly text-based processing. However, text-based processing becomes ineffective in VCR, as accurate text restoration depends on the combined information from the provided images, the surrounding context, and the subtle cues in the tiny exposed areas of the masked text. We develop a pipeline to generate synthetic images for the VCR task from image-caption pairs, with adjustable caption visibility to control the task difficulty. Using this pipeline, we construct VCR-Wiki, a dataset for VCR built from Wikipedia images with captions, comprising 2.11M English and 346K Chinese entities, each available in easy and hard variants. Our results reveal that current vision-language models significantly lag behind human performance on the VCR task, and that merely fine-tuning the models on our dataset does not lead to notable improvements. We release VCR-Wiki and the data construction code to facilitate future research.
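To make the construction concrete, below is a minimal sketch of the kind of generation step such a pipeline performs, assuming Pillow for image handling. The function name `render_vcr_image`, the vertical image-plus-caption layout, and the masking geometry controlled by `visible_px` are illustrative assumptions for exposition, not the released VCR-Wiki implementation.

```python
# A minimal sketch of VCR-style synthetic image generation, assuming Pillow.
# The function name, layout, and masking geometry are illustrative assumptions;
# the released VCR-Wiki pipeline may differ in fonts, layout, and masking rules.
from PIL import Image, ImageDraw, ImageFont

def render_vcr_image(image_path: str, caption: str, visible_px: int = 2) -> Image.Image:
    """Stack a rendered caption below an image, then occlude the middle band
    of the text so only `visible_px` rows of pixels at the top and bottom of
    the character line survive as pixel-level hints.
    Smaller `visible_px` -> less exposed text -> a harder instance."""
    img = Image.open(image_path).convert("RGB")
    font = ImageFont.load_default()  # swap in a TTF via ImageFont.truetype

    # Render the caption on a white strip as wide as the image.
    strip = Image.new("RGB", (img.width, 32), "white")
    draw = ImageDraw.Draw(strip)
    draw.text((8, 8), caption, fill="black", font=font)

    # Occlude the middle of the rendered text with a white rectangle,
    # leaving thin visible bands at the top and bottom as hints.
    left, top, right, bottom = draw.textbbox((8, 8), caption, font=font)
    draw.rectangle([left, top + visible_px, right, bottom - visible_px], fill="white")

    # Stack the original image and the masked caption strip vertically.
    out = Image.new("RGB", (img.width, img.height + strip.height), "white")
    out.paste(img, (0, 0))
    out.paste(strip, (0, img.height))
    return out
```

Shrinking `visible_px` exposes fewer pixel rows of each character, which is one natural way to realize the adjustable caption visibility behind the easy and hard variants.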