The Segment Anything Model (SAM) has shown impressive performance in interactive and zero-shot segmentation across diverse domains, suggesting that it has learned a general concept of "objects" from its large-scale training. However, we observe that SAM struggles with certain types of objects, particularly those featuring dense, tree-like structures and low textural contrast with their surroundings. These failure modes are critical for understanding its limitations in real-world use. To systematically examine this issue, we propose metrics to quantify two key object characteristics: tree-likeness and textural separability. Through extensive controlled synthetic experiments and testing on real datasets, we demonstrate that SAM's performance is noticeably correlated with these factors. We unify these behaviors under the concept of "textural confusion", where SAM either misinterprets local structure as global texture, leading to over-segmentation, or fails to differentiate objects from similarly textured backgrounds. These findings offer the first quantitative framework for modeling SAM's challenges, providing valuable insights into its limitations and guiding future improvements for vision foundation models.