Variational regularisation is the primary method for solving inverse problems, and recently there has been considerable work leveraging deep-learning-based regularisation for enhanced performance. However, few results exist addressing the convergence of such regularisation, particularly within the context of critical points as opposed to global minimisers. In this paper, we present a generalised formulation of convergent regularisation in terms of critical points, and show that this is achieved by a class of weakly convex regularisers. We prove convergence of the primal-dual hybrid gradient method for the associated variational problem and, under a Kurdyka–Łojasiewicz condition, an $\mathcal{O}(\log k/k)$ ergodic convergence rate. Finally, applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks (IWCNNs), and show empirically that IWCNNs can lead to improved performance of learned adversarial regularisers for computed tomography (CT) reconstruction.
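To fix ideas, the following is a minimal sketch of the setting described above; the notation ($A$, $y$, $\lambda$, $R_\theta$, $\rho$) is illustrative and not fixed by the abstract.

```latex
% Illustrative variational problem (notation assumed): recover x from
% indirect, noisy data y = Ax + e by solving
\begin{equation*}
  \hat{x} \in \operatorname*{arg\,min}_{x}\;
    \tfrac{1}{2}\,\|Ax - y\|_2^2 + \lambda\, R_\theta(x),
\end{equation*}
% where R_\theta is the (learned) regulariser. R_\theta is \rho-weakly
% convex if x \mapsto R_\theta(x) + (\rho/2)\,\|x\|_2^2 is convex; since
% the resulting objective is in general nonconvex, convergence guarantees
% are stated at critical points rather than global minimisers.
```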
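As a complement, here is a hedged code sketch of one standard way to parameterise a weakly convex function: an input convex neural network (ICNN) minus a small quadratic. This illustrates weak convexity only and is not the paper's IWCNN construction; all class names, layer sizes, and the weak-convexity constant `rho` are assumptions for illustration.

```python
# Minimal sketch (assumed architecture, not the paper's IWCNN): an ICNN is
# convex in its input if the hidden-to-hidden weights are nonnegative and the
# activations are convex and nondecreasing; subtracting (rho/2)||x||^2 then
# yields a rho-weakly convex function by definition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input convex network: convex in x via nonnegative z-weights and
    convex, nondecreasing activations (softplus)."""
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(layers - 1)]
        )
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx_k, Wz_k in zip(self.Wx[1:], self.Wz):
            # clamping enforces nonnegative z-weights, preserving convexity in x
            z = F.softplus(Wx_k(x) + F.linear(z, Wz_k.weight.clamp(min=0.0)))
        # nonnegative combination of convex functions plus a bias stays convex
        return F.linear(z, self.out.weight.clamp(min=0.0), self.out.bias).squeeze(-1)

class WeaklyConvexReg(nn.Module):
    """R(x) = ICNN(x) - (rho/2)||x||^2 is rho-weakly convex:
    R(x) + (rho/2)||x||^2 is convex by construction."""
    def __init__(self, dim, rho=0.1):
        super().__init__()
        self.icnn = ICNN(dim)
        self.rho = rho

    def forward(self, x):
        return self.icnn(x) - 0.5 * self.rho * (x ** 2).sum(dim=-1)

# Usage: evaluate the regulariser on a batch of 8 vectors of dimension 16.
R = WeaklyConvexReg(dim=16, rho=0.1)
x = torch.randn(8, 16)
print(R(x).shape)  # torch.Size([8])
```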