The task of image captioning requires an algorithm to generate natural language descriptions of visual inputs. Recent advancements have seen a convergence between image captioning research and the development of Large Language Models (LLMs) and Multimodal LLMs -- such as GPT-4V and Gemini -- which extend the capabilities of text-only LLMs to multiple modalities. This paper investigates whether Multimodal LLMs can supplant traditional image captioning networks by evaluating their performance on various image description benchmarks. We explore both the zero-shot capabilities of these models and their adaptability to different semantic domains through fine-tuning methods, including prompt learning, prefix tuning, and low-rank adaptation. Our results demonstrate that while Multimodal LLMs achieve impressive zero-shot performance, fine-tuning for specific domains while preserving their generalization capabilities remains challenging. We discuss the implications of these findings for future research in image captioning and the development of more adaptable Multimodal LLMs.
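For readers unfamiliar with the last of the fine-tuning methods named above, low-rank adaptation (LoRA) freezes the pretrained weights and learns only a low-rank additive update. A minimal sketch of the reparameterization follows, using the standard notation of Hu et al. (2021) rather than symbols defined in this paper:

\[
h = W_0 x + \Delta W x = W_0 x + B A x, \qquad B \in \mathbb{R}^{d \times r}, \; A \in \mathbb{R}^{r \times k}, \; r \ll \min(d, k),
\]

where the pretrained matrix $W_0 \in \mathbb{R}^{d \times k}$ stays frozen and only the low-rank factors $B$ and $A$ receive gradient updates, which is what makes the method parameter-efficient.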