Can regularization terms in the training of invertible neural networks lead to known Bayesian point estimators in reconstruction? Invertible networks are attractive for inverse problems due to their inherent stability and interpretability. Recently, optimization strategies for invertible neural networks that approximate either a reconstruction map or the forward operator have been studied from a Bayesian perspective, but each approach has its limitations. To address these, we introduce and analyze two regularization terms for network training that, upon inversion of the network, recover properties of classical Bayesian point estimators: the first can be connected to the posterior mean, while the second resembles the maximum a posteriori (MAP) estimator. Our theoretical analysis characterizes how each loss shapes both the learned forward operator and its inverse reconstruction map. Numerical experiments support our findings and demonstrate how these regularizers introduce data dependence in a stable and interpretable way.
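For context, the two classical Bayesian point estimators referenced above have standard definitions; the following is a brief reminder of those definitions, not the paper's specific loss terms. For a posterior density $p(x \mid y)$, the posterior mean and MAP estimators are
\[
\hat{x}_{\mathrm{PM}}(y) = \mathbb{E}[x \mid y] = \int x \, p(x \mid y) \, \mathrm{d}x,
\qquad
\hat{x}_{\mathrm{MAP}}(y) = \operatorname*{arg\,max}_{x} \, p(x \mid y).
\]
Assuming, for illustration, a Gaussian noise model $y = Ax + \eta$ with $\eta \sim \mathcal{N}(0, \sigma^2 I)$ and a prior $p(x) \propto \exp(-R(x))$, the MAP estimator takes the familiar variational form
\[
\hat{x}_{\mathrm{MAP}}(y) = \operatorname*{arg\,min}_{x} \, \frac{1}{2\sigma^2} \|Ax - y\|^2 + R(x),
\]
which is the standard sense in which regularized reconstruction can be read as Bayesian point estimation.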