Multi-contrast super-resolution (MCSR) is crucial for enhancing MRI, but current deep learning methods are limited. They typically require large, paired low- and high-resolution (LR/HR) training datasets, which are scarce, and are trained for fixed upsampling scales. While recent self-supervised methods remove the paired-data requirement, they fail to leverage valuable population-level priors. In this work, we propose a novel, decoupled MCSR framework that resolves both limitations. We reformulate MCSR into two stages: (1) an unpaired cross-modal synthesis (uCMS) module, trained once on unpaired population data to learn a robust anatomical prior; and (2) a lightweight, patient-specific implicit re-representation (IrR) module. The IrR module is optimized in a self-supervised manner to fuse the population prior with the subject's own LR target data. This design uniquely combines population-level knowledge with patient-specific fidelity without requiring any paired LR/HR or paired cross-modal training data. By building the IrR module on an implicit neural representation, our framework is also inherently scale-agnostic. Our method demonstrates superior quantitative performance on different datasets, with exceptional robustness at extreme scales (16×, 32×), a regime where competing methods fail. Our work presents a data-efficient, flexible, and computationally lightweight paradigm for MCSR, enabling high-fidelity, arbitrary-scale super-resolution.
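To make the two-stage design concrete, the following is a minimal, illustrative sketch of a patient-specific implicit re-representation stage, assuming a coordinate-MLP implicit neural representation conditioned on the HR anatomical prior produced by the synthesis stage and fitted with a self-supervised consistency loss against the subject's own LR target volume. The abstract does not specify the actual architecture or loss; all names here (`IrRNet`, `fit_irr`) and design details are hypothetical.

```python
# Illustrative sketch only, not the paper's implementation.
# Assumptions: the uCMS stage yields an HR prior volume `prior_vol`; the IrR
# stage is a coordinate MLP queried at continuous coordinates (hence
# scale-agnostic) and supervised only by the patient's LR target volume.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IrRNet(nn.Module):
    """Hypothetical implicit re-representation: (coords, prior value) -> intensity."""

    def __init__(self, in_dim: int = 3, prior_dim: int = 1, hidden: int = 256, layers: int = 4):
        super().__init__()
        dims = [in_dim + prior_dim] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU(inplace=True))
        self.mlp = nn.Sequential(*blocks)

    def forward(self, coords: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalized to [-1, 1]; prior: (N, prior_dim) sampled from the uCMS output.
        return self.mlp(torch.cat([coords, prior], dim=-1))


def fit_irr(lr_target: torch.Tensor, prior_vol: torch.Tensor, steps: int = 2000, lr: float = 1e-4):
    """Fit the INR per patient: predictions at LR-grid coordinates must match lr_target.

    lr_target: (1, 1, d, h, w) low-resolution target-contrast volume.
    prior_vol: (1, 1, D, H, W) HR anatomical prior synthesized by the uCMS module.
    """
    device = lr_target.device
    model = IrRNet().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    # Coordinates of the LR grid in normalized [-1, 1] space; at inference any
    # denser query grid can be used, giving arbitrary-scale output.
    d, h, w = lr_target.shape[-3:]
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d, device=device),
        torch.linspace(-1, 1, h, device=device),
        torch.linspace(-1, 1, w, device=device),
        indexing="ij",
    )
    coords = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)  # (N, 3), xyz order for grid_sample

    # Sample the HR prior at the LR coordinates (trilinear interpolation).
    grid = coords.view(1, -1, 1, 1, 3)
    prior = F.grid_sample(prior_vol, grid, align_corners=True).reshape(1, -1).t()  # (N, 1)
    target = lr_target.reshape(-1, 1)

    for _ in range(steps):
        opt.zero_grad()
        pred = model(coords, prior)
        # Self-supervised: only the subject's own LR data supervises the fit.
        loss = F.mse_loss(pred, target)
        loss.backward()
        opt.step()
    return model
```

In this sketch, arbitrary-scale reconstruction amounts to evaluating the fitted `IrRNet` on a finer coordinate grid (with the prior resampled accordingly); no retraining is needed when the upsampling factor changes.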