In this work, we provide a new convergence theory for plug-and-play proximal gradient descent (PnP-PGD) under prior mismatch, where the denoiser is trained on a data distribution different from that of the inference task at hand. To the best of our knowledge, this is the first convergence proof for PnP-PGD under prior mismatch. Compared with existing theoretical results for PnP algorithms, our results remove the need for several restrictive and unverifiable assumptions. Moreover, we derive a convergence theory for equivariant PnP (EPnP) under the prior-mismatch setting, proving that EPnP reduces the error variance and explicitly tightens the convergence bound.
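For concreteness, the following minimal Python sketch illustrates the two algorithms named above: the PnP-PGD iteration, which replaces the proximal step with a learned denoiser, and the equivariant wrapper behind EPnP, which applies a randomly drawn group transform before denoising and its inverse after. This is an illustration under stated assumptions, not the paper's implementation; the least-squares data-fidelity term, the forward operator `A`, the `denoiser` callable, and the `transforms` list of (T, T_inv) pairs are hypothetical placeholders.

```python
import numpy as np

def pnp_pgd(y, A, denoiser, step, n_iters=100):
    """Plug-and-play proximal gradient descent (sketch).

    Iterates x_{k+1} = D(x_k - step * grad f(x_k)) with the assumed
    data-fidelity term f(x) = 0.5 * ||A x - y||^2, so that
    grad f(x) = A^T (A x - y).
    """
    x = A.T @ y  # crude initialization from the measurements
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # denoiser stands in for the prox step
    return x

def equivariant(denoiser, transforms):
    """Wrap a denoiser D as T_g^{-1}(D(T_g(x))) with g drawn uniformly
    at random on each call (single-sample equivariant PnP sketch).

    `transforms` is a hypothetical list of (T, T_inv) function pairs,
    e.g. rotations/flips and their inverses.
    """
    def wrapped(x):
        T, T_inv = transforms[np.random.randint(len(transforms))]
        return T_inv(denoiser(T(x)))
    return wrapped
```

Averaging the wrapped denoiser over many draws of g would symmetrize it exactly; the single-sample form above is the stochastic variant whose error variance the EPnP analysis concerns.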