How do two sets of images differ? Discerning set-level differences is crucial for understanding model behaviors and analyzing datasets, yet manually sifting through thousands of images is impractical. To aid in this discovery process, we explore the task of automatically describing the differences between two $\textbf{sets}$ of images, which we term Set Difference Captioning. This task takes in image sets $D_A$ and $D_B$, and outputs a description that is more often true on $D_A$ than on $D_B$. We outline a two-stage approach that first proposes candidate difference descriptions from the image sets and then re-ranks the candidates by checking how well they differentiate the two sets. We introduce VisDiff, which first captions the images and prompts a language model to propose candidate descriptions, then re-ranks these descriptions using CLIP. To evaluate VisDiff, we collect VisDiffBench, a dataset of 187 paired image sets with ground-truth difference descriptions. We apply VisDiff to various domains, such as comparing datasets (e.g., ImageNet vs. ImageNetV2), comparing classification models (e.g., zero-shot CLIP vs. supervised ResNet), summarizing model failure modes (supervised ResNet), characterizing differences between generative models (e.g., StableDiffusionV1 and V2), and discovering what makes images memorable. Using VisDiff, we find interesting and previously unknown differences in datasets and models, demonstrating its utility in revealing nuanced insights.
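The re-ranking stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pre-computed, L2-normalized CLIP image embeddings for $D_A$ and $D_B$ and CLIP text embeddings for the candidate descriptions (here stand-in NumPy arrays), and scores each candidate by how much higher its mean similarity is on $D_A$ than on $D_B$. The function name `rank_descriptions` is hypothetical.

```python
import numpy as np

def rank_descriptions(emb_a, emb_b, text_embs, descriptions):
    """Rank candidate difference descriptions for two image sets.

    emb_a:     (n_a, d) L2-normalized image embeddings for set D_A
    emb_b:     (n_b, d) L2-normalized image embeddings for set D_B
    text_embs: (k, d)   L2-normalized text embeddings for k candidates
    Returns candidates sorted by descending difference score, where a
    higher score means the description is "more true" of D_A than D_B.
    """
    # Cosine similarity of every image to every candidate description.
    sim_a = emb_a @ text_embs.T  # (n_a, k)
    sim_b = emb_b @ text_embs.T  # (n_b, k)

    # Difference of mean similarities: how much better each candidate
    # matches D_A relative to D_B.
    scores = sim_a.mean(axis=0) - sim_b.mean(axis=0)  # (k,)

    order = np.argsort(-scores)
    return [(descriptions[i], float(scores[i])) for i in order]

# Toy example with hand-crafted unit vectors standing in for CLIP
# embeddings: set A aligns with candidate 0, set B with candidate 1.
emb_a = np.array([[1.0, 0.0], [1.0, 0.0]])
emb_b = np.array([[0.0, 1.0], [0.0, 1.0]])
text_embs = np.array([[1.0, 0.0], [0.0, 1.0]])
ranked = rank_descriptions(emb_a, emb_b, text_embs,
                           ["photos of dogs", "photos of cats"])
print(ranked[0][0])  # the candidate truer on D_A ranks first
```

In practice the embeddings would come from a CLIP image/text encoder and the candidates from the proposal stage; the scoring rule here (difference of mean cosine similarities) is one simple instantiation of "checking how well a description differentiates the two sets."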