State-of-the-art (SoTA) image captioning models are often trained on the Microsoft Common Objects in Context (MS-COCO) dataset, which contains human-annotated captions with an average length of approximately ten tokens. Although effective for general scene understanding, such short captions often fail to capture complex scenes or convey detailed information. Moreover, captioning models tend to be biased towards the ``average'' caption, which captures only the more general aspects of a scene and overlooks finer details. In this paper, we present a novel approach that generates richer, more informative image captions by combining the captions produced by different SoTA captioning models. Our method requires no additional model training: given an image, it leverages pre-trained models from the literature to generate initial captions and then ranks them using a newly introduced image-text metric, which we name BLIPScore. Subsequently, the top two captions are fused by a Large Language Model (LLM) to produce the final, more detailed description. Experimental results on the MS-COCO and Flickr30k test sets demonstrate the effectiveness of our approach in terms of caption-image alignment and hallucination reduction, as measured by the ALOHa, CAPTURE, and Polos metrics. A subjective study lends additional support to these results, suggesting that the captions produced by our method are generally perceived as more consistent with human judgment. By combining the strengths of diverse SoTA models, our method enhances the quality and richness of image captions, narrowing the gap between automated systems and the informative nature of human-written descriptions. This advance enables the generation of more suitable captions for training both vision-language and captioning models.
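The rank-and-fuse pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_fn` stands in for the BLIPScore image-text metric and `fuse_fn` for the LLM fusion step, neither of which is specified at the API level in the abstract; both names, and the prompt wording, are assumptions.

```python
# Hypothetical sketch of the training-free caption-fusion pipeline:
# 1) collect candidate captions from several pre-trained captioners,
# 2) rank them with an image-text alignment score (BLIPScore in the paper),
# 3) fuse the top two with an LLM into one detailed caption.
# `score_fn` and `fuse_fn` are placeholders for BLIPScore and the LLM call.
from typing import Callable, List


def select_top_captions(captions: List[str],
                        score_fn: Callable[[str], float],
                        k: int = 2) -> List[str]:
    """Rank candidate captions by image-text alignment and keep the top k."""
    return sorted(captions, key=score_fn, reverse=True)[:k]


def fuse_captions(top_captions: List[str],
                  fuse_fn: Callable[[str], str]) -> str:
    """Ask an LLM (via `fuse_fn`) to merge the top captions into one."""
    prompt = ("Merge the following captions of the same image into a single, "
              "detailed caption without adding unsupported details:\n"
              + "\n".join(f"- {c}" for c in top_captions))
    return fuse_fn(prompt)


if __name__ == "__main__":
    # Toy stand-ins: a fixed score table instead of BLIPScore,
    # and a trivial function instead of an LLM.
    scores = {"a dog": 0.2,
              "a brown dog on grass": 0.9,
              "a dog running on a lawn": 0.7}
    top = select_top_captions(list(scores), scores.get, k=2)
    print(top)
    print(fuse_captions(top, lambda p: p))
```

In practice `score_fn` would embed the image and each caption with a model such as BLIP and return their matching score, and `fuse_fn` would call an instruction-tuned LLM with the constructed prompt.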