Foundation models, pre-trained on large-scale data, have demonstrated impressive zero-shot capabilities in various downstream tasks. However, in object detection and instance segmentation, two fundamental computer vision tasks that rely heavily on extensive human annotations, foundation models such as SAM and DINO struggle to achieve satisfactory performance. In this study, we reveal that the devil is in the object boundary, \textit{i.e.}, these foundation models fail to discern the boundaries between individual objects. For the first time, we show that CLIP, which has never accessed any instance-level annotations, can provide a highly beneficial and strong instance-level boundary prior through the clustering results of a particular intermediate layer. Following this surprising observation, we propose $\textbf{Zip}$, which $\textbf{Z}$ips up CL$\textbf{ip}$ and SAM in a novel classification-first-then-discovery pipeline, enabling annotation-free, open-vocabulary object detection and instance segmentation in complex scenes. Zip significantly boosts SAM's mask AP on the COCO dataset by 12.5% and establishes state-of-the-art performance across various settings, including training-free, self-training, and label-efficient finetuning. Furthermore, annotation-free Zip even achieves performance comparable to the best-performing open-vocabulary object detectors that use base annotations. Code is released at https://github.com/ChengShiest/Zip-Your-CLIP