We consider a neural network architecture designed to solve inverse problems in which the degradation operator is linear and known. The architecture is constructed by unrolling a forward-backward algorithm derived from the minimization of an objective function combining a data-fidelity term, a Tikhonov-type regularization term, and a potentially nonsmooth convex penalty. We theoretically analyze the robustness of this inversion method to input perturbations. Ensuring robustness aligns with the principles of inverse problem theory, since it guarantees both the continuity of the inversion method and its resilience to small noise, a critical property given the known vulnerability of deep neural networks to adversarial perturbations. A key novelty of our work lies in examining the robustness of the proposed network to perturbations in its bias, which represents the observed data in the inverse problem. Additionally, we provide numerical illustrations of the analytical Lipschitz bounds derived in our analysis.
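To make the unrolling construction concrete, the following is a minimal sketch, not the paper's actual architecture: each "layer" performs one forward-backward step on an objective of the form 0.5‖Ax − y‖² + (β/2)‖x‖² + λ‖x‖₁, where the ℓ₁ norm stands in as one possible nonsmooth convex penalty and all parameter values (`beta`, `lam`, the step size rule) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1 (one possible nonsmooth convex penalty)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_forward_backward(A, y, n_layers=10, gamma=None, beta=0.1, lam=0.05):
    """Hypothetical unrolled network: each layer is one forward-backward step on
    F(x) = 0.5*||Ax - y||^2 + (beta/2)*||x||^2 + lam*||x||_1,
    with y entering each layer as the bias (the observed data)."""
    m, n = A.shape
    if gamma is None:
        # Step size from the Lipschitz constant of the smooth part: ||A||_2^2 + beta
        L = np.linalg.norm(A, 2) ** 2 + beta
        gamma = 1.0 / L
    x = np.zeros(n)
    for _ in range(n_layers):
        grad = A.T @ (A @ x - y) + beta * x                # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (proximal) step
    return x

# Illustrative usage on a synthetic sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = unrolled_forward_backward(A, y, n_layers=200)
```

Because each layer is 1-Lipschitz in `x` when `gamma <= 1/L` (the proximal operator is nonexpansive and the gradient step is a contraction-type map), sketches of this kind are what make the Lipschitz analysis of the unrolled network tractable.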