Accurate medical image segmentation is essential for effective diagnosis and treatment planning, but it is often challenged by domain shifts caused by variations in imaging devices, acquisition conditions, and patient-specific attributes. Traditional domain generalization methods typically require that part of the test domain be included in the training set, which is not always feasible in clinical settings with limited diverse data. Additionally, although diffusion models have demonstrated strong capabilities in image generation and style transfer, they often fail to preserve the structural information critical for precise medical analysis. To address these issues, we propose a novel medical image segmentation method that combines diffusion models with a Structure-Preserving Network for structure-aware, one-shot image stylization. Our approach mitigates domain shift by transforming images from diverse sources into a consistent style while preserving the location, size, and shape of lesions. This ensures robust and accurate segmentation even when the target domain is absent from the training data. Experimental evaluations on colonoscopy polyp segmentation and skin lesion segmentation datasets show that our method improves the robustness and accuracy of segmentation models, outperforming baseline models trained without style transfer. This structure-aware stylization framework offers a practical solution for medical image segmentation across diverse domains, facilitating more reliable clinical diagnoses.
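The inference pipeline described above — map every input image into one shared style, then segment — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the function names (`stylize`, `segment`), the averaging-based "style transfer", and the threshold segmenter are all placeholders standing in for the diffusion-based stylization and the trained segmentation network.

```python
# Hypothetical sketch of the structure-aware stylization pipeline.
# `stylize` stands in for the diffusion + Structure-Preserving Network
# step; `segment` stands in for the downstream segmentation model.

def stylize(image, style_reference):
    """One-shot stylization placeholder: pull pixel intensities toward
    the single style exemplar while leaving spatial structure intact
    (here modeled as simple per-pixel blending)."""
    return [[(p + s) / 2 for p, s in zip(row, srow)]
            for row, srow in zip(image, style_reference)]

def segment(image, threshold=0.5):
    """Segmentation placeholder: threshold intensities into a binary
    lesion mask."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

# Two toy "domains" with the same lesion structure but different
# intensity profiles (e.g., two scanners), plus one style exemplar.
domain_a  = [[0.9, 0.2], [0.2, 0.9]]   # bright acquisition
domain_b  = [[0.7, 0.0], [0.0, 0.7]]   # dimmer acquisition
style_ref = [[0.8, 0.1], [0.1, 0.8]]   # single style reference image

mask_a = segment(stylize(domain_a, style_ref))
mask_b = segment(stylize(domain_b, style_ref))
print(mask_a == mask_b)  # → True: both domains yield the same mask
```

The point of the sketch is the ordering: stylization happens before segmentation, so the segmenter only ever sees images in one style, which is how the approach sidesteps needing target-domain data at training time.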