News image captioning requires a model to generate an informative, entity-rich caption from a news image and its associated news article. Although Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across various vision-language tasks, our research finds that current MLLMs still have limitations in handling entity information in the news image captioning task. Moreover, while MLLMs can process long inputs, generating high-quality news image captions still requires a trade-off between the sufficiency and conciseness of the textual input. To explore the potential of MLLMs and address the problems we discovered, we propose an Entity-Aware Multimodal Alignment based approach for news image captioning. Our approach first aligns the MLLM through a Balance Training Strategy with two extra alignment tasks, the Entity-Aware Sentence Selection task and the Entity Selection task, alongside the News Image Captioning task, to enhance its capability to handle multimodal entity information. The aligned MLLM then uses the additional entity-related information it explicitly extracts to supplement its textual input when generating news image captions. Our approach outperforms all previous models in CIDEr score on the GoodNews dataset (72.33 -> 88.39) and the NYTimes800k dataset (70.83 -> 85.61).