Zero-shot composed image retrieval (ZS-CIR) is a rapidly growing area with significant practical applications, allowing users to retrieve a target image by providing a reference image and a relative caption describing the desired modifications. Existing ZS-CIR methods often struggle to capture fine-grained changes and to integrate visual and semantic information effectively. They primarily rely on either transforming the multimodal query into a single text using image-to-text models or employing large language models to generate target image descriptions, approaches that often fail to capture complementary visual information and complete semantic context. To address these limitations, we propose a novel fine-grained zero-shot composed image retrieval method with Complementary Visual-Semantic Integration (CVSI). Specifically, CVSI comprises three key components: (1) Visual Information Extraction, which not only extracts global image features but also uses a pre-trained mapping network to convert the image into a pseudo token, combining it with the modification text and the objects most likely to be added. (2) Semantic Information Extraction, which uses a pre-trained captioning model to generate multiple captions for the reference image, then leverages an LLM to generate the modified captions and the objects most likely to be added. (3) Complementary Information Retrieval, which integrates information extracted from both the query and database images to retrieve the target image, enabling the system to handle retrieval queries efficiently in a variety of situations. Extensive experiments on three public datasets (CIRR, CIRCO, and FashionIQ) demonstrate that CVSI significantly outperforms existing state-of-the-art methods. Our code is available at https://github.com/yyc6631/CVSI.
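The Complementary Information Retrieval step described above can be sketched as a late fusion of similarity scores from the two branches: one query embedding from the visual branch (pseudo token plus modification text) and several from the semantic branch (LLM-modified captions). The function names, the averaging over captions, and the weighting parameter `alpha` below are illustrative assumptions, not the paper's exact formulation; this is a minimal sketch of score-level fusion using random embeddings in place of real encoder outputs.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two matrices: (m, d) x (n, d) -> (m, n)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def complementary_retrieval(visual_query: np.ndarray,
                            semantic_queries: np.ndarray,
                            db_embeds: np.ndarray,
                            alpha: float = 0.5) -> np.ndarray:
    """Fuse visual- and semantic-branch similarities over a database of image embeddings.

    visual_query:     (d,)   embedding of the pseudo token combined with the modification text
    semantic_queries: (k, d) embeddings of the k LLM-modified captions
    db_embeds:        (n, d) database image embeddings
    Returns database indices ranked from best to worst match.
    NOTE: the fusion weight `alpha` and caption averaging are hypothetical choices.
    """
    vis_sim = cosine_sim(visual_query[None, :], db_embeds)[0]       # (n,)
    sem_sim = cosine_sim(semantic_queries, db_embeds).mean(axis=0)  # (n,), averaged over captions
    scores = alpha * vis_sim + (1.0 - alpha) * sem_sim
    return np.argsort(-scores)

# Toy usage with random stand-ins for encoder outputs:
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 8))                               # 5 database images, 8-dim embeddings
vq = db[2] + 0.01 * rng.normal(size=8)                     # query close to image 2
sq = db[2][None, :] + 0.01 * rng.normal(size=(3, 8))       # 3 modified-caption embeddings
ranking = complementary_retrieval(vq, sq, db)
```

In practice the embeddings would come from a shared vision-language encoder so that both branches score against the same database index; the weighted sum is one simple way to let the branches complement each other.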