Segmentation of cardiac magnetic resonance images (MRI) is crucial for analyzing and assessing cardiac function, helping to diagnose and treat various cardiovascular diseases. Most recent techniques rely on deep learning and usually require large amounts of labeled data. Few-shot learning addresses this problem by reducing the dependency on labeled data. In this work, we introduce a new method that combines few-shot learning with a U-Net architecture and Gaussian Process Emulators (GPEs), improving performance by better integrating information from a support set. The GPEs are trained to learn the relation between the support images and their corresponding masks in latent space, enabling the segmentation of unseen query images given only a small labeled support set at inference. We evaluate our model on the public M&Ms-2 dataset, assessing its ability to segment the heart in cardiac MRI acquired from different orientations, and compare it with state-of-the-art unsupervised and few-shot methods. Our architecture achieves higher Dice coefficients than these methods, especially in the more challenging setups where the support set is very small.
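The core idea (a GP learning the support-image-to-mask relation in latent space) can be illustrated with a minimal sketch. The encoder outputs, latent dimensions, and random data below are hypothetical stand-ins for the U-Net features described in the abstract, not the paper's actual implementation:

```python
# Minimal sketch of the GPE idea: fit a Gaussian Process on latent
# features of a small labeled support set, then predict the mask
# latent of an unseen query image. All shapes and data here are
# hypothetical placeholders for U-Net bottleneck features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical latent codes: 5 support images with 16-dim image
# latents and 8-dim mask latents (as if produced by an encoder).
support_img_latents = rng.normal(size=(5, 16))
support_mask_latents = rng.normal(size=(5, 8))

# The GP learns the mapping image latent -> mask latent from the
# small support set; no gradient training on the query is needed.
gpe = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gpe.fit(support_img_latents, support_mask_latents)

# At inference, predict the mask latent for a query image; a decoder
# would then map this latent back to a segmentation mask.
query_latent = rng.normal(size=(1, 16))
pred_mask_latent, std = gpe.predict(query_latent, return_std=True)
print(pred_mask_latent.shape)
```

Because the GP is fit only on the support set at inference time, swapping in a different support set requires no retraining of the network weights, which is what makes the few-shot setting cheap.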