We tackle the challenging problem of single-source domain generalization (DG) for medical image segmentation, where we train a network on one domain (e.g., CT) and directly apply it to a different domain (e.g., MR) without adapting the model and without requiring images or annotations from the new domain during training. Our method diversifies the source domain through semantic-aware random convolution, where different regions of a source image are augmented differently at training time, based on their annotation labels. At test time, we complement the randomization of the training domain by mapping the intensity of target-domain images, making them similar to source-domain data. We perform a comprehensive evaluation on a variety of cross-modality and cross-center generalization settings for abdominal, whole-heart, and prostate segmentation, where we outperform previous DG techniques in the vast majority of experiments. Additionally, we investigate our method when training on whole-heart CT or MR data and testing on the diastolic and systolic phases of cine MR data captured with different scanner hardware. Overall, our evaluation shows that our method achieves new state-of-the-art performance in DG for medical image segmentation, even matching the performance of the in-domain baseline in several settings. We will release our source code upon acceptance of this manuscript.
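The two components named above can be illustrated with a minimal sketch. This is not the authors' released implementation: the function names, the label-wise kernel sampling, the kernel normalization, and the use of quantile matching for the test-time intensity mapping are all our assumptions, chosen only to make the idea concrete.

```python
import numpy as np


def _conv2d(img, k):
    # Correlate a 2D image with kernel k (reflect padding); for random
    # kernels, correlation and convolution are equivalent in distribution.
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, k.shape)
    return np.einsum("ijkl,kl->ij", win, k)


def semantic_random_conv(image, labels, kernel_size=3, rng=None):
    """Semantic-aware random convolution (illustrative sketch).

    Each annotated region of `image` (per the integer `labels` map)
    is filtered with its own freshly sampled random kernel, so organ
    and background appearance are randomized independently.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.zeros_like(image, dtype=np.float64)
    for lab in np.unique(labels):
        k = rng.standard_normal((kernel_size, kernel_size))
        k /= np.abs(k).sum()  # keep the filter response bounded
        filtered = _conv2d(image.astype(np.float64), k)
        out[labels == lab] = filtered[labels == lab]
    # Rescale the augmented slice back to the original intensity range
    lo, hi = float(image.min()), float(image.max())
    span = out.max() - out.min()
    out = (out - out.min()) / (span + 1e-8) * (hi - lo) + lo
    return out


def match_intensity(target_img, source_ref):
    """Test-time intensity mapping via quantile (histogram) matching:
    one plausible way to make target-domain intensities resemble the
    source-domain distribution before inference (an assumption here).
    """
    t = target_img.ravel()
    order = np.argsort(t)
    ref_sorted = np.sort(np.asarray(source_ref).ravel())
    # Sample the source reference at matching quantiles
    idx = np.linspace(0, ref_sorted.size - 1, t.size).astype(int)
    out = np.empty(t.size, dtype=np.float64)
    out[order] = ref_sorted[idx]
    return out.reshape(target_img.shape)
```

In this sketch, training batches would be built by calling `semantic_random_conv` on each source slice, while at deployment each target-domain image would first pass through `match_intensity` against a pool of source-domain intensities.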