Recent Large Vision-Language Models (LVLMs) demonstrate impressive abilities on numerous image understanding and reasoning tasks. The task of fine-grained object classification (e.g., distinguishing between \textit{animal species}), however, remains insufficiently probed, despite its downstream importance. We fill this evaluation gap by creating \texttt{FOCI} (\textbf{F}ine-grained \textbf{O}bject \textbf{C}lass\textbf{I}fication), a difficult multiple-choice benchmark for fine-grained object classification, built from existing object classification datasets: (1) the multiple-choice format avoids the ambiguous answers that arise when classification is cast as an open-ended QA task; (2) we retain classification difficulty by mining negative labels with a CLIP model. \texttt{FOCI} complements five popular classification datasets with four domain-specific subsets from ImageNet-21k. We benchmark 12 public LVLMs on \texttt{FOCI} and show that it tests for a \textit{complementary skill} to established image understanding and reasoning benchmarks. Crucially, CLIP models exhibit dramatically better performance than LVLMs. Since the image encoders of LVLMs come from these CLIP models, this points to inadequate alignment between the encoder and the LLM for fine-grained object distinction, and calls for (pre)training data with more fine-grained annotation. We release our code at \url{https://github.com/gregor-ge/FOCI-Benchmark}.
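The CLIP-based negative mining step can be illustrated concretely. Below is a minimal sketch in Python using the Hugging Face \texttt{transformers} CLIP API: for each image, all candidate labels are ranked by CLIP image--text similarity, and the highest-scoring labels other than the gold label are kept as distractor answer options. The checkpoint name, the number of distractors \texttt{k}, and the use of image-to-text (rather than text-to-text) similarity are illustrative assumptions, not details specified here.

\begin{verbatim}
# Hypothetical sketch of CLIP-based distractor mining; the checkpoint,
# k, and the image-to-text similarity criterion are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def mine_negatives(image: Image.Image, gold_label: str,
                   all_labels: list[str], k: int = 3) -> list[str]:
    """Return the k labels (excluding the gold label) whose CLIP text
    embeddings are most similar to the image embedding."""
    inputs = processor(text=all_labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image[0]: similarity of this image to every label
        scores = model(**inputs).logits_per_image[0]
    ranked = [all_labels[i] for i in scores.argsort(descending=True).tolist()]
    return [label for label in ranked if label != gold_label][:k]
\end{verbatim}

Because the mined distractors are, by construction, the labels CLIP finds most visually plausible for the image, the resulting multiple-choice items stay hard rather than trivially solvable.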