We introduce INQUIRE, a text-to-image retrieval benchmark designed to challenge multimodal vision-language models on expert-level queries. INQUIRE includes iNaturalist 2024 (iNat24), a new dataset of five million natural world images, along with 250 expert-level retrieval queries. These queries are paired with all relevant images comprehensively labeled within iNat24, comprising 33,000 total matches. Queries span categories such as species identification, context, behavior, and appearance, emphasizing tasks that require nuanced image understanding and domain expertise. Our benchmark evaluates two core retrieval tasks: (1) INQUIRE-Fullrank, a full dataset ranking task, and (2) INQUIRE-Rerank, a reranking task for refining top-100 retrievals. Detailed evaluation of a range of recent multimodal models demonstrates that INQUIRE poses a significant challenge, with the best models failing to achieve an mAP@50 above 50%. In addition, we show that reranking with more powerful multimodal models can enhance retrieval performance, yet there remains a significant margin for improvement. By focusing on scientifically motivated ecological challenges, INQUIRE aims to bridge the gap between AI capabilities and the needs of real-world scientific inquiry, encouraging the development of retrieval systems that can help accelerate ecological and biodiversity research. Our dataset and code are available at https://inquire-benchmark.github.io.
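To make the headline metric concrete, the sketch below shows one common formulation of AP@k (and its mean over queries, mAP@k) for a ranked list of binary relevance labels, normalized by min(total relevant, k). This is an illustrative implementation of the standard metric, not the benchmark's official evaluation code; the paper's exact normalization may differ.

```python
def average_precision_at_k(relevance, k=50, total_relevant=None):
    """AP@k for a binary relevance list ordered by model score (best first).

    One common definition: the sum of precision@i at each relevant rank i <= k,
    divided by min(total_relevant, k). Illustrative only, not the official
    INQUIRE evaluation code.
    """
    top = relevance[:k]
    if total_relevant is None:
        # Assume the full relevance list covers all labeled matches for the query.
        total_relevant = sum(relevance)
    hits = 0
    precision_sum = 0.0
    for i, rel in enumerate(top, start=1):
        if rel:
            hits += 1
            precision_sum += hits / i  # precision at rank i
    denom = min(total_relevant, k)
    return precision_sum / denom if denom else 0.0


def mean_ap_at_k(per_query_relevance, k=50):
    """mAP@k: average AP@k across all queries."""
    aps = [average_precision_at_k(rel, k) for rel in per_query_relevance]
    return sum(aps) / len(aps) if aps else 0.0
```

Under this formulation, a model that ranks every relevant image ahead of all irrelevant ones scores 1.0 on that query, so the reported sub-50% mAP@50 reflects how often relevant images are pushed below irrelevant ones in the top 50.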