We present a semi-supervised domain adaptation framework for brain vessel segmentation across different image modalities. Existing state-of-the-art methods focus on a single modality, despite the wide range of available cerebrovascular imaging techniques; the resulting distribution shifts can severely degrade generalization across modalities. Relying on annotated angiographies and only a limited number of annotated venographies, our framework performs image-to-image translation and semantic segmentation, leveraging a disentangled and semantically rich latent space to represent heterogeneous data and carry out image-level adaptation from the source to the target domain. Moreover, we reduce the typical complexity of cycle-based architectures and minimize the use of adversarial training, yielding an efficient and intuitive model with stable training. We evaluate our method on magnetic resonance angiographies and venographies. While achieving state-of-the-art performance in the source domain, our method attains a Dice similarity coefficient in the target domain that is only 8.9% lower, highlighting its promising potential for robust cerebrovascular image segmentation across modalities.
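For readers unfamiliar with the evaluation metric cited above, the Dice similarity coefficient measures the overlap between a predicted segmentation mask and the ground truth. A minimal NumPy sketch (illustrative only; the function name, the `eps` smoothing term, and the toy masks are assumptions, not part of the paper's evaluation pipeline):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 2x3 masks: 2 overlapping foreground pixels out of 3 per mask.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 indicates perfect overlap and 0.0 indicates disjoint masks, so the reported 8.9% target-domain drop is relative to this [0, 1] scale.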