The inductive bias of convolutional neural networks (CNNs) can serve as a strong prior for image restoration, a phenomenon known as the Deep Image Prior (DIP). Recently, DIP has been applied to unsupervised dynamic MRI reconstruction, adopting a generative model that maps from the latent space to the image space. However, existing methods usually use a pyramid-shaped CNN generator shared by all frames, embedding the temporal modeling within the latent space, which may hamper the expressive capability of the model. In this work, we propose a novel scheme for dynamic MRI representation, named ``Graph Image Prior'' (GIP). GIP adopts a two-stage generative network under a new modeling methodology: it first employs independent CNNs to recover the image structure of each frame, and then exploits the spatio-temporal correlations within the feature space parameterized by a graph model. A graph convolutional network is utilized for feature fusion and dynamic image generation. In addition, we devise an ADMM algorithm that alternately optimizes the images and the network parameters to improve reconstruction performance. Experiments on cardiac cine MRI reconstruction demonstrate that GIP outperforms compressed-sensing methods and other DIP-based unsupervised methods, significantly narrowing the performance gap with state-of-the-art supervised algorithms. Moreover, GIP displays superior generalization ability when transferred to a different reconstruction setting, without the need for any additional data.