Multi-modal medical images provide complementary soft-tissue characteristics that aid in the screening and diagnosis of diseases. However, limited scanning time, image corruption, and varying imaging protocols often result in incomplete multi-modal images, limiting the use of multi-modal data for clinical purposes. To address this issue, in this paper we propose a novel unified multi-modal image synthesis method for missing-modality imputation. Our method adopts a generative adversarial architecture that aims to synthesize missing modalities from any combination of available ones with a single model. To this end, we design a Commonality- and Discrepancy-Sensitive Encoder for the generator to exploit both the modality-invariant and the modality-specific information contained in the input modalities. Incorporating both types of information facilitates the generation of images with consistent anatomy and realistic details of the desired distribution. In addition, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which makes the network robust to randomly missing modalities. The module performs both hard integration and soft integration, ensuring effective feature combination while avoiding information loss. Evaluated on two public multi-modal magnetic resonance datasets, the proposed method handles a variety of synthesis tasks effectively and outperforms previous methods.
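To make the unification idea concrete, the following is a minimal sketch of how a module might fuse feature vectors from a varying number of available modalities with both a hard path and a soft path. The specific choices here (element-wise max for hard integration, a softmax-weighted average for soft integration, and a simple average of the two paths) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def dynamic_feature_unification(features):
    """Fuse features from any non-empty subset of modalities.

    features: list of (C,) arrays, one per available modality.
    Returns a single (C,) fused feature vector.
    """
    stacked = np.stack(features)                      # (M, C), M = number of available modalities
    # Hard integration (assumed form): element-wise max across modalities.
    hard = stacked.max(axis=0)
    # Soft integration (assumed form): softmax weights over per-modality
    # mean activations, then a weighted average of the modality features.
    scores = stacked.mean(axis=1)                     # (M,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    soft = weights @ stacked                          # (C,)
    # Combine the two paths; a plain average is a placeholder choice.
    return 0.5 * (hard + soft)
```

Because the fusion operates over whatever list of features it receives, the same module handles one, two, or all modalities without retraining, which is the property that makes a single model robust to random missing inputs.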