This article introduces a novel approach to learning monotone neural networks through a newly defined penalization loss. The proposed method is particularly effective for a class of variational problems, namely monotone inclusion problems, which arise frequently in image processing tasks. The Forward-Backward-Forward (FBF) algorithm is employed to address these problems and remains applicable even when the Lipschitz constant of the neural network is unknown. Notably, the FBF algorithm provides convergence guarantees under the condition that the learned operator is monotone. Building on plug-and-play methodologies, our objective is to apply these newly learned operators to solve non-linear inverse problems. To this end, we first formulate the problem as a variational inclusion problem and then train a monotone neural network to approximate an operator that may not be inherently monotone. Leveraging the FBF algorithm, we present simulation examples in which the non-linear inverse problem is successfully solved.
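To make the two ingredients above concrete, here is a minimal PyTorch sketch: a generic monotonicity penalty of the kind one might add to a training loss, and Tseng's FBF iteration with a backtracking step size so that no Lipschitz constant of the operator is required. The names (`monotonicity_penalty`, `fbf`, `prox_A`) and all constants are hypothetical, and the paper's actual penalization loss is not reproduced here; this is an illustration under those assumptions, not the authors' implementation.

```python
import torch

def monotonicity_penalty(T, x, y):
    """Generic penalty encouraging <T(x) - T(y), x - y> >= 0 on sampled
    pairs (x, y). Illustrative only; the paper's loss may differ."""
    inner = ((T(x) - T(y)) * (x - y)).flatten(1).sum(dim=1)
    return torch.relu(-inner).mean()

@torch.no_grad()
def fbf(B, prox_A, x0, gamma=1.0, theta=0.9, shrink=0.7, iters=500, tol=1e-8):
    """Tseng's forward-backward-forward iteration for 0 in A(x) + B(x),
    with backtracking so no Lipschitz constant of B is needed.

    B      : monotone, Lipschitz operator (e.g. the learned network).
    prox_A : resolvent J_{gamma A} = (I + gamma A)^{-1}, called as
             prox_A(point, gamma). Hypothetical interface.
    """
    x = x0.clone()
    for _ in range(iters):
        Bx = B(x)
        # Backtrack until gamma * ||B(z) - B(x)|| <= theta * ||z - x||.
        while True:
            z = prox_A(x - gamma * Bx, gamma)
            Bz = B(z)
            if gamma * (Bz - Bx).norm() <= theta * (z - x).norm() + 1e-12:
                break
            gamma *= shrink
        # Second (correcting) forward step.
        x_new = z - gamma * (Bz - Bx)
        if (x_new - x).norm() <= tol:
            return x_new
        x = x_new
    return x
```

In a plug-and-play setting, `prox_A` would typically encode the data-fidelity or regularization term while `B` is the learned monotone network; the backtracking loop is what removes the need to know the network's Lipschitz constant in advance.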