Multi-contrast super-resolution (MCSR) is crucial for enhancing MRI, but current deep learning methods are limited. They typically require large paired low-/high-resolution (LR/HR) training datasets, which are scarce, and they are trained for fixed upsampling scales. While recent self-supervised methods remove the paired-data requirement, they fail to leverage valuable population-level priors. In this work, we propose a novel, decoupled MCSR framework that resolves both limitations. We reformulate MCSR into two stages: (1) an unpaired cross-modal synthesis (uCMS) module, trained once on unpaired population data to learn a robust anatomical prior; and (2) a lightweight, patient-specific implicit re-representation (IrR) module. The IrR module is optimized in a self-supervised manner to fuse the population prior with the subject's own LR target data. This design uniquely combines population-level knowledge with patient-specific fidelity without requiring any paired LR/HR or paired cross-modal training data. By building the IrR module on an implicit neural representation, our framework is also inherently scale-agnostic. Our method demonstrates superior quantitative performance across multiple datasets, with exceptional robustness at extreme scales (16x, 32x), a regime where competing methods fail. Our work presents a data-efficient, flexible, and computationally lightweight paradigm for MCSR, enabling high-fidelity, arbitrary-scale MRI super-resolution.
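To illustrate why an implicit neural representation makes the framework scale-agnostic, the sketch below shows a minimal INR: a network that maps continuous spatial coordinates to intensities, so the same representation can be queried on a grid of any resolution. This is not the authors' implementation; the Fourier-feature lifting, layer sizes, and untrained random weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features lift 2-D coordinates to a richer input space
# (a common choice for coordinate networks; frequencies are assumed here).
B = rng.normal(scale=3.0, size=(2, 16))    # frequency matrix
W1 = rng.normal(scale=0.1, size=(32, 64))  # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(64, 1))   # output-layer weights

def inr(coords):
    """Evaluate the (untrained) INR at continuous (x, y) coords in [0, 1]^2."""
    proj = coords @ B                                              # (N, 16)
    feats = np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 32)
    h = np.tanh(feats @ W1)                                        # (N, 64)
    return h @ W2                                                  # (N, 1)

def sample_grid(n):
    """Query the same representation on an n x n grid: any n yields an image."""
    xs = np.linspace(0.0, 1.0, n)
    coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    return inr(coords).reshape(n, n)

lr_img = sample_grid(8)    # coarse rendering of the representation
hr_img = sample_grid(256)  # 32x denser sampling of the *same* representation
print(lr_img.shape, hr_img.shape)
```

Because the output resolution is just the density at which the coordinate grid is sampled, no fixed upsampling factor is baked into the model, which is the property the abstract refers to as arbitrary-scale super-resolution.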