This paper proposes a data-driven approach for constructing firmly nonexpansive operators. We demonstrate its applicability in Plug-and-Play methods, where classical algorithms such as forward-backward splitting, the Chambolle--Pock primal-dual iteration, the Douglas--Rachford iteration, or the alternating direction method of multipliers (ADMM) are modified by replacing one proximal map with a learned firmly nonexpansive operator. We provide a sound mathematical foundation for the problem of learning such an operator via expected and empirical risk minimization. We prove that, as the number of training points increases, the empirical risk minimization problem converges (in the sense of Gamma-convergence) to the expected risk minimization problem. Further, we derive a solution strategy that yields operators that are firmly nonexpansive and piecewise affine on the convex envelope of the training set. We show that, in an appropriate sense, this operator converges to the best empirical solution as the number of points in the envelope increases. Finally, the experimental section details practical implementations of the method and presents an application in image denoising.
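To make the Plug-and-Play idea concrete, the following is a minimal sketch of a PnP forward-backward iteration for denoising, where the role of the proximal map is played by a firmly nonexpansive operator. This is not the paper's learned operator: for illustration we use soft thresholding, which is the proximal map of the l1-norm and hence firmly nonexpansive. The step size `tau`, threshold `lam`, and the 1-D test signal are assumptions chosen for the example.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal map of lam * ||.||_1; proximal maps are firmly nonexpansive.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pnp_forward_backward(y, operator, tau=0.5, n_iter=100):
    # PnP forward-backward splitting for f(x) = 0.5 * ||x - y||^2:
    #   x_{k+1} = T(x_k - tau * grad f(x_k)),
    # with the proximal map replaced by a generic operator T.
    x = y.copy()
    for _ in range(n_iter):
        grad = x - y                  # gradient of the data-fidelity term
        x = operator(x - tau * grad)  # backward (operator) step
    return x

# Piecewise-constant test signal corrupted by Gaussian noise (illustrative).
rng = np.random.default_rng(0)
signal = np.zeros(64)
signal[10:20] = 2.0
noisy = signal + 0.3 * rng.standard_normal(64)

# With T = soft_threshold(., 0.2) and tau = 0.5, the fixed point is the
# minimizer of 0.5*||x - y||^2 + 0.4*||x||_1, i.e. soft_threshold(y, 0.4).
denoised = pnp_forward_backward(noisy, lambda z: soft_threshold(z, 0.2))
```

Replacing the `lambda` with a learned firmly nonexpansive operator gives the Plug-and-Play variant discussed in the paper; firm nonexpansiveness is what guarantees convergence of such iterations.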