We introduce INQUIRE, a text-to-image retrieval benchmark designed to challenge multimodal vision-language models on expert-level queries. INQUIRE includes iNaturalist 2024 (iNat24), a new dataset of five million natural world images, along with 250 expert-level retrieval queries. Each query is paired with all of its relevant images, comprehensively labeled within iNat24, for a total of 33,000 matches. Queries span categories such as species identification, context, behavior, and appearance, emphasizing tasks that require nuanced image understanding and domain expertise. Our benchmark evaluates two core retrieval tasks: (1) INQUIRE-Fullrank, a full dataset ranking task, and (2) INQUIRE-Rerank, a reranking task for refining top-100 retrievals. Detailed evaluation of a range of recent multimodal models demonstrates that INQUIRE poses a significant challenge, with the best models failing to achieve an mAP@50 above 50%. In addition, we show that reranking with more powerful multimodal models can enhance retrieval performance, yet there remains a significant margin for improvement. By focusing on scientifically motivated ecological challenges, INQUIRE aims to bridge the gap between AI capabilities and the needs of real-world scientific inquiry, encouraging the development of retrieval systems that can assist with accelerating ecological and biodiversity research. Our dataset and code are available at https://inquire-benchmark.github.io.
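For readers unfamiliar with the headline metric, the following is a minimal sketch of one common formulation of average precision at a cutoff (here AP@50, averaged over queries to give mAP@50). The function name and the choice to normalize by the number of hits retrieved within the cutoff are illustrative assumptions, not necessarily the exact protocol used by the benchmark.

```python
def average_precision_at_k(ranked_rels, k=50):
    """AP@k for one query.

    ranked_rels: binary relevance labels (1 = relevant match) for the
    retrieved list, ordered best-first. Only the top k are scored.
    Normalization here divides by the number of relevant items found
    within the top k (an illustrative choice; variants divide by
    min(total relevant, k)).
    """
    ranked_rels = ranked_rels[:k]
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0


def mean_average_precision_at_k(per_query_rels, k=50):
    """mAP@k: mean of AP@k over all queries."""
    aps = [average_precision_at_k(rels, k) for rels in per_query_rels]
    return sum(aps) / len(aps) if aps else 0.0
```

Because each position's precision is weighted by whether that position is relevant, the metric rewards placing true matches early in the ranking, which is why reranking the top-100 candidates (as in INQUIRE-Rerank) can move the score substantially.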