Vision-based crack detection faces deployment challenges: robust models are large, while edge devices have limited resources. Lightweight models trained with knowledge distillation (KD) can address this, but state-of-the-art (SOTA) KD methods compromise anti-noise robustness. This paper develops Robust Feature Knowledge Distillation (RFKD), a framework that improves robustness while retaining the precision of lightweight models for crack segmentation. RFKD distils knowledge from a teacher model's logit layers and intermediate feature maps, leveraging mixed clean and noisy images to transfer robust patterns to the student model and thereby improve its precision, generalisation, and anti-noise performance. To validate the proposed RFKD, a lightweight crack segmentation model, PoolingCrack Tiny (PCT), with only 0.5 M parameters, is also designed and used as the student. The results show a significant improvement on noisy images, with RFKD achieving a mean Dice score (mDS) 62% higher than SOTA KD methods.
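The abstract describes distilling both logit-level and intermediate-feature knowledge over a mixed clean/noisy batch. As a minimal sketch of that idea (not the paper's exact formulation: the loss terms, weights, and the function name `rfkd_style_loss` are illustrative assumptions), a generic KL-based logit term plus an MSE feature term could be combined like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def logit_kd_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened class distributions."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = (p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8))).sum(axis=-1)
    return float(kl.mean() * T * T)

def feature_kd_loss(student_feat, teacher_feat):
    """MSE between intermediate feature maps (assumes matching shapes)."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def rfkd_style_loss(clean_imgs, noisy_imgs, teacher, student,
                    w_logit=1.0, w_feat=1.0):
    """Distillation loss on a mixed clean/noisy batch (illustrative weights).

    Both models return (feature_map, per_pixel_logits); the teacher
    provides the robust targets the student is pulled towards.
    """
    batch = np.concatenate([clean_imgs, noisy_imgs], axis=0)
    t_feat, t_logits = teacher(batch)
    s_feat, s_logits = student(batch)
    return (w_logit * logit_kd_loss(s_logits, t_logits)
            + w_feat * feature_kd_loss(s_feat, t_feat))

# Toy demo with stand-in "models" (hypothetical, for shape-checking only).
rng = np.random.default_rng(0)

def toy_model(scale):
    def forward(x):
        n = x.shape[0]
        feat = scale * np.ones((n, 4, 8, 8))         # fake feature map
        logits = rng.standard_normal((n, 8, 8, 2))   # fake 2-class logits
        return feat, logits
    return forward

clean = rng.standard_normal((2, 1, 16, 16))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
loss = rfkd_style_loss(clean, noisy, toy_model(1.0), toy_model(0.5))
```

In a real training loop the segmentation loss against ground-truth masks would be added to this distillation term; that weighting is a design choice the abstract does not specify.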