Since multiple MRI contrasts of the same anatomy contain redundant information, one contrast can guide the reconstruction of an undersampled subsequent contrast. To this end, several end-to-end learning-based guided reconstruction methods have been proposed. However, a key challenge is the requirement for large paired training datasets comprising raw data and aligned reference images. We propose a modular two-stage approach that does not require any k-space training data, relying solely on image-domain datasets, a large part of which can be unpaired. Additionally, our approach provides an explanatory framework for the multi-contrast problem based on the shared and non-shared generative factors underlying two given contrasts. A content/style model of two-contrast image data is learned from a largely unpaired image-domain dataset and is subsequently applied as a plug-and-play operator in iterative reconstruction. The disentanglement of content and style allows explicit representation of contrast-independent and contrast-specific factors. Consequently, incorporating prior information into the reconstruction reduces to a simple replacement of the aliased content of the reconstruction iterate with high-quality content derived from the reference scan. Combining this component with a data consistency step and introducing a general corrective process for the content yields an iterative scheme. We name this novel approach PnP-CoSMo. Various aspects, such as interpretability and convergence, are explored via simulations. Furthermore, its practicality is demonstrated on the public NYU fastMRI DICOM dataset, showing improved generalizability compared to end-to-end methods, and on two in-house multi-coil raw datasets, offering up to 32.6\% more acceleration over learning-based non-guided reconstruction for a given SSIM.
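The iterative scheme described above can be sketched numerically. The sketch below is illustrative only: `encode`/`decode` are trivial placeholders standing in for the learned content/style model, the sampling model is single-coil Cartesian, and all function names and interfaces are assumptions, not the authors' actual implementation.

```python
import numpy as np

def encode(image):
    # Placeholder content/style split: magnitude as "content",
    # phase as "style". The real model is a learned disentangling network.
    return np.abs(image), np.angle(image)

def decode(content, style):
    # Placeholder inverse of encode: recombine content and style.
    return content * np.exp(1j * style)

def data_consistency(image, kspace_meas, mask):
    # Enforce agreement with the acquired k-space samples (Cartesian grid).
    k = np.fft.fft2(image)
    k[mask] = kspace_meas[mask]  # keep measured samples exactly
    return np.fft.ifft2(k)

def pnp_cosmo_sketch(kspace_meas, mask, ref_content, n_iters=10):
    # Two alternating steps: replace the aliased content of the iterate
    # with reference-derived content, then project onto the data.
    x = np.fft.ifft2(kspace_meas)          # zero-filled initialization
    for _ in range(n_iters):
        _, style = encode(x)               # discard the aliased content
        x = decode(ref_content, style)     # inject reference content
        x = data_consistency(x, kspace_meas, mask)
    return x
```

In the fully sampled limit the data consistency step alone recovers the image, so the scheme reduces to the usual inverse FFT; the content replacement matters only where k-space samples are missing.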