In computer vision, Image Difference Captioning (IDC) is crucial for accurately describing variations between closely related images. Traditional IDC methods often rely on specialist models, which restricts their applicability across varied contexts. This paper introduces OneDiff, a novel generalist approach built on a robust vision-language model architecture that integrates a siamese image encoder with a Visual Delta Module. This configuration enables the precise detection and articulation of fine-grained differences between image pairs. OneDiff is trained with a dual-phase strategy comprising Coupled Sample Training and multi-task learning across a diverse array of data types, supported by our newly developed DiffCap Dataset. This dataset merges real-world and synthetic data, enriching the training process and bolstering the model's robustness. Extensive evaluation on diverse IDC benchmarks, including Spot-the-Diff, Image-Editing-Request, and Birds-to-Words, shows that OneDiff consistently outperforms existing state-of-the-art models in accuracy and adaptability, achieving improvements of up to 97% in CIDEr points on average. By setting a new benchmark in IDC, OneDiff paves the way for more versatile and effective applications in detecting and describing visual differences. The code, models, and data will be made publicly available.