The advent of large vision-language models (LVLMs) marks a notable step toward artificial general intelligence. However, how effective they are on both specialized and general tasks remains to be investigated. This article evaluates the competency of popular LVLMs on specialized and general tasks, aiming to provide a comprehensive understanding of these novel models. To gauge their efficacy on specialized tasks, we construct a comprehensive testbed spanning three scenarios: natural, healthcare, and industrial. It covers six challenging tasks: salient, camouflaged, and transparent object detection; polyp and skin lesion detection; and industrial anomaly detection. We examine the visual recognition and localization performance of three recent open-source LVLMs: MiniGPT-v2, LLaVA-1.5, and Shikra. Moreover, we conduct empirical studies with these models and GPT-4V, assessing their multi-modal understanding on general tasks such as object counting, absurd question answering, affordance reasoning, attribute recognition, and spatial relation reasoning. Our investigations reveal that these models demonstrate limited proficiency not only on specialized tasks but also on general tasks. We analyze this inadequacy and identify several potential contributing factors, including limited cognition in specialized tasks, object hallucination, text-to-image interference, and reduced robustness on complex problems. We hope this study provides valuable insights for the future development of LVLMs, strengthening their ability to cope with both general and specialized applications.