Large-scale foundation models have become the mainstream deep learning method, whereas in civil engineering the scale of AI models remains strictly limited. In this work, a vision foundation model is introduced for crack segmentation. Two parameter-efficient fine-tuning methods, adapters and low-rank adaptation (LoRA), are adopted to adapt a foundation model, the Segment Anything Model (SAM), to semantic segmentation. The fine-tuned model, CrackSAM, shows excellent performance across different scenes and materials. To test the zero-shot performance of the proposed method, two unique datasets covering road and exterior-wall cracks, totaling 810 images, are collected, annotated, and open-sourced. Comparative experiments are conducted against twelve established semantic segmentation models. On datasets with artificial noise and on previously unseen datasets, CrackSAM far outperforms all of these state-of-the-art models, exhibiting remarkable superiority under challenging conditions such as dim lighting, shadows, road markings, construction joints, and other interference factors. These cross-scenario results demonstrate the outstanding zero-shot capability of foundation models and provide new ideas for developing vision models in civil engineering.
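The abstract names low-rank adaptation as one of the two fine-tuning methods but gives no implementation details. The following minimal PyTorch sketch illustrates the general LoRA idea only: a pretrained linear layer is frozen and a trainable low-rank update is added in parallel. The `LoRALinear` class and the `sam.image_encoder.blocks[...].attn.qkv` attribute path in the usage comment are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero-init B: starts identical to the base layer
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Hypothetical usage: inject LoRA into the qkv projections of a ViT-style
# image encoder such as SAM's (attribute names are assumptions).
# for block in sam.image_encoder.blocks:
#     block.attn.qkv = LoRALinear(block.attn.qkv, r=4)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=4)
    x = torch.randn(2, 768)
    print(layer(x).shape)  # torch.Size([2, 768])
```

Because B is zero-initialized, the wrapped model reproduces the pretrained SAM exactly at the start of fine-tuning, and only the small rank-r factors receive gradients.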