Visual Question Answering (VQA) has become central to user experience, particularly as the generalization capabilities of Vision-Language Models (VLMs) have improved. However, evaluating VLMs against an application's requirements using a standardized framework in practical settings remains challenging. This paper addresses that gap with an end-to-end framework. We present VQA360, a novel dataset derived from established VQA benchmarks and annotated with task types, application domains, and knowledge types for comprehensive evaluation. We also introduce GoEval, a multimodal evaluation metric built on GPT-4o that achieves a correlation of 56.71% with human judgments. Our experiments with state-of-the-art VLMs reveal that no single model excels universally, making the choice of model a key design decision. Proprietary models such as Gemini-1.5-Pro and GPT-4o-mini generally outperform the rest, but open-source models such as InternVL-2-8B and CogVLM-2-Llama-3-19B also demonstrate competitive strengths while offering additional advantages. Our framework can also be extended to other tasks.
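To make the GoEval idea concrete, below is a minimal sketch of the multimodal LLM-as-judge pattern it builds on: prompting GPT-4o with the image, question, and a candidate answer, and asking it to grade the answer. The prompt wording, the correct/incorrect output format, and the `judge_vqa_answer` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a GPT-4o-based VQA judge in the spirit of GoEval.
# Prompt text, grading scale, and function names are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_vqa_answer(image_path: str, question: str, candidate_answer: str) -> str:
    """Ask GPT-4o whether a candidate answer to a visual question is correct."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "You are grading a visual question answering system.\n"
                            f"Question: {question}\n"
                            f"Candidate answer: {candidate_answer}\n"
                            "Looking at the image, reply with 'correct' or "
                            "'incorrect' followed by a one-sentence justification."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content

# Example (hypothetical inputs):
# verdict = judge_vqa_answer("kitchen.jpg", "How many mugs are on the table?", "three")
```

Aggregating such per-item verdicts over a labeled dataset is what allows the judge's scores to be correlated with human judgments, as reported for GoEval above.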