The rapid development of deep learning has driven significant progress in image semantic segmentation, a fundamental task in computer vision. Semantic segmentation algorithms typically depend on the availability of pixel-level labels (i.e., object masks), which are expensive, time-consuming, and labor-intensive to obtain. Weakly-supervised semantic segmentation (WSSS) is an effective way to avoid such labeling. It relies only on partial or incomplete annotations, providing a cost-effective alternative to fully-supervised semantic segmentation. In this paper, we focus on WSSS with image-level labels, the most challenging form of WSSS. Our work has two parts. First, we conduct a comprehensive survey of traditional methods, primarily those presented at premier research conferences. We categorize them into four groups according to the level at which they operate: pixel-wise, image-wise, cross-image, and external data. Second, we investigate the applicability of visual foundation models, such as the Segment Anything Model (SAM), to WSSS. We scrutinize SAM in two intriguing scenarios: text prompting and zero-shot learning. We offer insights into the potential and challenges of deploying visual foundation models for WSSS, facilitating future developments in this exciting research area.