Zero-shot image captioning (IC) without well-paired image-text data falls into two categories: training-free and text-only-training. Both types of methods typically realize zero-shot IC by combining a pre-trained vision-language model such as CLIP for image-text similarity evaluation with a pre-trained language model (LM) for caption generation; the main difference between them is whether a textual corpus is used to train the LM. Though achieving attractive performance on some metrics, existing methods share common drawbacks: training-free methods tend to produce hallucinations, while text-only-training methods often lose generalization capability. To move forward, in this paper we propose MeaCap, a novel Memory-Augmented zero-shot image Captioning framework. Specifically, equipped with a textual memory, we introduce a retrieve-then-filter module that extracts key concepts highly related to the image. By deploying our proposed memory-augmented visual-related fusion score in a keywords-to-sentence LM, MeaCap generates concept-centered captions that remain highly consistent with the image, with fewer hallucinations and more world knowledge. MeaCap achieves state-of-the-art performance on a series of zero-shot IC settings. Our code is available at https://github.com/joeyz0z/MeaCap.
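The retrieve-then-filter idea from the abstract can be illustrated with a minimal sketch: retrieve the memory captions most similar to the image in a shared embedding space, pool their keywords, then keep only keywords that are themselves close enough to the image embedding. This is not the paper's implementation; the function names, the dummy 2-D embeddings, and the similarity threshold are all hypothetical, standing in for CLIP-style features.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between vector `a` and each row of matrix `b`.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return b @ a

def retrieve_then_filter(image_emb, memory_embs, memory_keywords,
                         keyword_embs, top_m=3, threshold=0.3):
    """Hypothetical sketch of a retrieve-then-filter step.

    Retrieve the top-m memory captions most similar to the image,
    pool the keywords attached to them, then filter out keywords
    whose own embedding is not close enough to the image.
    """
    sims = cosine_sim(image_emb, memory_embs)          # retrieve
    top_idx = np.argsort(sims)[::-1][:top_m]
    candidates = sorted({kw for i in top_idx for kw in memory_keywords[i]})
    kept = [kw for kw in candidates                    # filter
            if cosine_sim(image_emb, keyword_embs[kw][None, :])[0] >= threshold]
    return kept

# Toy example with 2-D stand-in embeddings (real systems would use
# high-dimensional vision-language features).
image = np.array([1.0, 0.0])
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
keywords = {0: ["dog"], 1: ["cat"], 2: ["park"]}
kw_embs = {"dog": np.array([1.0, 0.0]),
           "cat": np.array([0.0, 1.0]),
           "park": np.array([0.8, 0.2])}
print(retrieve_then_filter(image, memory, keywords, kw_embs, top_m=2))
# → ['dog', 'park']
```

The kept keywords would then be handed to a keywords-to-sentence LM; the filtering step is what ties the generated concepts back to the image rather than to the retrieved text alone.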