Image captioning has become an essential vision-and-language research task: given an image or video, the goal is to predict the most accurate caption. The research community has achieved impressive results by continually proposing new models and approaches that improve performance. Nevertheless, despite the growing number of proposals, the metrics used to measure these advances have remained practically untouched over the years. As evidence, classical metrics such as BLEU, METEOR, CIDEr, and ROUGE are still widely used today, alongside more sophisticated ones such as BERTScore and CLIPScore. Hence, it is essential to revisit how the advances, limitations, and scope of new image captioning proposals are measured, and to adapt evaluation metrics to these advanced approaches. This work proposes a new evaluation metric for the image captioning problem. To that end, we first built a human-labeled dataset that assesses the degree to which captions correlate with the content of their images. Taking these human scores as ground truth, we propose a new metric and compare it against several well-known metrics, from classical to recent ones. The proposed metric outperforms them, and we present and discuss several interesting insights.