Plug-and-Play Priors (PnP) is a well-known class of methods for solving inverse problems in computational imaging. PnP methods combine physical forward models with learned prior models specified as image denoisers. A common issue with learned models is a performance drop when there is a distribution shift between the training and testing data. Test-time training (TTT) was recently proposed as a general strategy for improving the performance of learned models when training and testing data come from different distributions. In this paper, we propose PnP-TTT as a new method for overcoming distribution shifts in PnP. PnP-TTT uses deep equilibrium learning (DEQ) to optimize a self-supervised loss at the fixed points of PnP iterations. PnP-TTT can be applied directly to a single test sample to improve the generalization of PnP. We show through simulations that, given a sufficient number of measurements, PnP-TTT enables the use of image priors trained on natural images for image reconstruction in magnetic resonance imaging (MRI).
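The loop the abstract describes — run PnP iterations to an approximate fixed point, evaluate a self-supervised measurement-consistency loss there, and update the prior with a DEQ-style (here Jacobian-free, one-extra-iteration) gradient on a single test sample — can be sketched on a toy 1-D problem. Everything below is an illustrative assumption rather than the paper's actual setup: the forward operator is random row subsampling, and the "denoiser" is a single-parameter blend `D_theta(z) = theta*z + (1-theta)*S z` with a fixed smoothing matrix `S`, standing in for a deep denoiser's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 48

# Toy ground truth and subsampled measurements y = A @ x_true (hypothetical setup).
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
A = np.eye(n)[rng.choice(n, size=m, replace=False)]  # random row-subsampling operator
y = A @ x_true

# Circulant smoothing matrix S (kernel [0.25, 0.5, 0.25]); the one-parameter
# "denoiser" D_theta(z) = theta*z + (1-theta)*S@z stands in for a learned prior.
S = sum(w * np.roll(np.eye(n), s, axis=1) for s, w in zip((-1, 0, 1), (0.25, 0.5, 0.25)))

gamma = 0.5  # data-consistency step size

def pnp_fixed_point(theta, iters=200):
    """Run PnP iterations (gradient step on the data term, then denoising)
    until an approximate fixed point x* is reached."""
    x = A.T @ y
    for _ in range(iters):
        z = x - gamma * A.T @ (A @ x - y)     # data-consistency gradient step
        x = theta * z + (1 - theta) * S @ z    # denoising step D_theta(z)
    return x

# Test-time training: adapt theta on this single measurement vector y by
# descending a self-supervised loss ||A x - y||^2 evaluated at the fixed point.
theta, lr = 0.2, 5e-3
losses = []
for _ in range(30):
    x_star = pnp_fixed_point(theta)
    # Jacobian-free DEQ gradient: differentiate one extra PnP iteration
    # applied at the fixed point, treating x_star itself as constant.
    z = x_star - gamma * A.T @ (A @ x_star - y)
    x_plus = theta * z + (1 - theta) * S @ z
    loss = float(np.sum((A @ x_plus - y) ** 2))          # self-supervised loss
    grad = (2 * A.T @ (A @ x_plus - y)) @ (z - S @ z)    # dL/dtheta
    theta -= lr * grad
    losses.append(loss)

print(f"loss before TTT: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

The Jacobian-free step used here is a common cheap surrogate for the full DEQ implicit gradient, which would additionally require solving a linear system involving the Jacobian of the PnP update at the fixed point; the sketch only illustrates the overall structure of adapting a prior at test time.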