Unfolded proximal neural networks (PNNs) form a family of methods that combines deep learning and proximal optimization approaches. They are built by unrolling a proximal algorithm for a fixed number of iterations, yielding a neural network tailored to a specific task whose linear operators can be learned through a prior training procedure. PNNs have been shown to be more robust than traditional deep learning approaches while achieving at least comparable performance, in particular in computational imaging. However, training PNNs still depends on the efficiency of available training algorithms. In this work, we propose a lifted training formulation based on Bregman distances for unfolded PNNs. Leveraging the deterministic mini-batch block-coordinate forward-backward method, we design a bespoke computational strategy, beyond traditional back-propagation, to solve the resulting learning problem efficiently. We assess the behaviour of the proposed training approach through numerical simulations on image denoising, considering a denoising PNN whose architecture is based on dual proximal-gradient iterations.
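For intuition, a generic sketch of such a lifted formulation follows; the symbols $W_\ell$, $x_i^\ell$, $\Psi_\ell$ and the exact penalty are illustrative assumptions consistent with the lifted Bregman training literature, not necessarily the paper's precise formulation. Auxiliary variables $x_i^\ell$ are introduced for the output of each layer $\ell$ and each training sample $i$, and the layer equations are relaxed into Bregman penalties:
\[
\min_{\{W_\ell\},\,\{x_i^\ell\}} \;\; \sum_{i} \mathcal{E}\big(x_i^{L},\,\bar{x}_i\big) \;+\; \sum_{i}\sum_{\ell=0}^{L-1} B_{\Psi_\ell}\!\big(x_i^{\ell+1},\, W_\ell\, x_i^{\ell}\big),
\]
where $\mathcal{E}$ is a data-fitting loss against the ground truth $\bar{x}_i$, and $B_{\Psi_\ell}$ is a Bregman-distance-based penalty whose minimizer with respect to $x_i^{\ell+1}$ is the proximal activation $\operatorname{prox}_{\Psi_\ell}(W_\ell x_i^{\ell})$. The resulting objective is convex in each block of variables separately, which a block-coordinate forward-backward scheme can exploit by cycling over the blocks $\{W_\ell\}$ and $\{x_i^\ell\}$ (processed in deterministic mini-batches) and updating each with a proximal-gradient step, rather than back-propagating through the whole network.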
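To illustrate the kind of architecture the last sentence refers to, below is a minimal PyTorch sketch of a denoising PNN obtained by unrolling dual proximal-gradient (dual forward-backward) iterations for an analysis-sparsity model; the class name, layer count, filter sizes, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualFBDenoiser(nn.Module):
    """Unrolled dual proximal-gradient (dual forward-backward) denoiser.

    Each layer performs one dual FB iteration for the denoising problem
        min_x 0.5 * ||x - y||^2 + lam * ||W_k x||_1,
    where the linear (analysis) operator W_k is a learned convolution,
    one per unrolled iteration.
    """

    def __init__(self, n_layers=20, n_filters=64, ksize=3, lam=0.1, gamma=0.1):
        super().__init__()
        self.lam, self.gamma = lam, gamma
        # One learned linear operator per unrolled iteration.
        self.ops = nn.ModuleList(
            [nn.Conv2d(1, n_filters, ksize, padding=ksize // 2, bias=False)
             for _ in range(n_layers)]
        )

    def forward(self, y):
        # Dual variable: one channel per analysis filter.
        u = torch.zeros(y.shape[0], self.ops[0].out_channels,
                        y.shape[2], y.shape[3], device=y.device)
        for W in self.ops:
            # Primal estimate from the dual variable: x = y - W^T u.
            # The adjoint W^T is a transposed convolution sharing W's weights.
            x = y - F.conv_transpose2d(u, W.weight, padding=W.padding[0])
            # Dual gradient step, then prox of (lam*||.||_1)^*, i.e. the
            # projection onto the l_inf ball of radius lam. The step size
            # gamma should satisfy gamma < 2 / ||W||^2 in the original
            # algorithm; in the unrolled network it could also be learned.
            u = torch.clamp(u + self.gamma * W(x), -self.lam, self.lam)
        # Final primal recovery from the last dual iterate.
        return y - F.conv_transpose2d(u, self.ops[-1].weight,
                                      padding=self.ops[-1].padding[0])


# Usage example on a random grayscale batch (illustrative only).
net = DualFBDenoiser()
y = torch.rand(4, 1, 64, 64)   # noisy images
x_hat = net(y)                 # denoised output, shape (4, 1, 64, 64)
```

Note that in this sketch each layer is exactly one iteration of a convergent proximal scheme with fixed weights; learning the per-layer operators is what distinguishes the PNN from the plain optimization algorithm.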