While large models have achieved significant progress in computer vision, challenges such as optimization complexity, the intricacy of transformer architectures, computational constraints, and practical application demands highlight the importance of simpler model designs in medical image segmentation. This need is particularly pronounced in mobile medical devices, which require lightweight, deployable models with real-time performance. However, existing lightweight models often suffer from poor robustness across datasets, limiting their widespread adoption. To address these challenges, this paper introduces LV-UNet, a lightweight and vanilla model that leverages a pre-trained MobileNetv3-Large backbone and incorporates fusible modules. LV-UNet employs an enhanced deep training strategy and switches to a deployment mode during inference via re-parameterization, significantly reducing the parameter count and computational overhead. Experimental results on the ISIC 2016, BUSI, CVC-ClinicDB, CVC-ColonDB, and Kvasir-SEG datasets demonstrate a better trade-off between performance and computational load. The code will be released at \url{https://github.com/juntaoJianggavin/LV-UNet}.