The Segment Anything Model (SAM) is a deep neural network foundation model for instance segmentation that has gained significant popularity owing to its zero-shot segmentation ability. SAM generates masks from various input prompts, such as text, bounding boxes, points, or masks, introducing a novel methodology to overcome the constraints posed by the scarcity of task-specific training data. While SAM is trained on an extensive dataset of ~11M images, this dataset consists mostly of natural photographic images, with only very limited coverage of other modalities. Although rapid progress in visual infrared surveillance and X-ray security screening imaging technologies, driven forward by advances in deep learning, has significantly enhanced the ability to detect, classify and segment objects with high accuracy, it is not evident whether SAM's zero-shot capabilities transfer to these modalities. This work assesses SAM's capability to segment objects of interest in the X-ray and infrared modalities. Our approach reuses the pre-trained SAM with three different prompts: bounding box, centroid and random points. We present quantitative and qualitative results to showcase its performance on selected datasets. Our results show that SAM can segment objects in the X-ray modality when given a box prompt, but its performance varies for point prompts. Specifically, SAM performs poorly in segmenting slender objects and organic materials, such as plastic bottles. We find that infrared objects are also challenging to segment with point prompts, given the low-contrast nature of this modality. This study shows that while SAM demonstrates outstanding zero-shot capabilities with box prompts, its performance ranges from moderate to poor for point prompts, indicating that special consideration of SAM's cross-modal generalisation is needed before use on X-ray/infrared imagery.
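The three prompt types evaluated above can be derived from a ground-truth binary mask. As a minimal sketch (the function name and details are our own assumptions, not the paper's code), the box prompt is the mask's bounding box in SAM's (x_min, y_min, x_max, y_max) convention, the centroid prompt is the mean of the foreground pixel coordinates, and the random-point prompts are sampled uniformly from the foreground:

```python
import numpy as np

def prompts_from_mask(mask: np.ndarray, n_random: int = 3, seed: int = 0):
    """Hypothetical helper: derive box, centroid, and random-point prompts
    from a binary ground-truth mask (H, W)."""
    ys, xs = np.nonzero(mask)
    # Bounding-box prompt in (x_min, y_min, x_max, y_max) order.
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    # Centroid prompt as (x, y); note this may fall outside the object
    # for concave or slender shapes, one plausible failure mode for
    # point prompts on such objects.
    centroid = np.array([xs.mean(), ys.mean()])
    # Random foreground points as an (n, 2) array of (x, y) pairs.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(xs), size=min(n_random, len(xs)), replace=False)
    random_points = np.stack([xs[idx], ys[idx]], axis=1)
    return box, centroid, random_points
```

These arrays match the shapes expected by SAM's predictor interface, where point prompts are passed as coordinate arrays with foreground labels and box prompts as a single 4-vector.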