We propose a regularization scheme for image reconstruction that leverages the power of deep learning while relying on classic sparsity-promoting models. Many deep-learning-based models are hard to interpret and cumbersome to analyze theoretically. In contrast, our scheme is interpretable because it corresponds to the solution of a sequence of convex optimization problems. For each problem in the sequence, a mask is generated from the previous solution to spatially refine the regularization strength. In this way, the model becomes progressively attentive to the image structure. For the underlying update operator, we prove the existence of a fixed point. As a special case, we investigate a mask generator for which the fixed-point iterations converge to a critical point of an explicit energy functional. In our experiments, we match the performance of state-of-the-art learned variational models for the solution of inverse problems. Additionally, our scheme offers a promising balance between interpretability, theoretical guarantees, reliability, and performance.
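To make the iterative structure concrete, the display below is a minimal sketch of one plausible instantiation of such mask-refined convex subproblems; the symbols (forward operator H, measurements y, sparsifying transform W, mask generator \Lambda, and weight \lambda > 0) are illustrative assumptions and are not specified in the abstract itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hedged sketch: one possible form of the mask-refined convex subproblems.
% All symbols are assumptions: H (forward operator), y (measurements),
% W (sparsifying transform), \Lambda (mask generator), \lambda > 0 (weight).
\begin{equation*}
  \mathbf{x}^{k+1}
  \in \operatorname*{arg\,min}_{\mathbf{x}}
      \;\tfrac{1}{2}\,\bigl\|\mathbf{H}\mathbf{x}-\mathbf{y}\bigr\|_2^{2}
      \;+\; \lambda\,\bigl\|\Lambda(\mathbf{x}^{k}) \odot \mathbf{W}\mathbf{x}\bigr\|_{1},
  \qquad k = 0, 1, 2, \dots
\end{equation*}
% Each subproblem is convex because the mask \Lambda(x^k) is held fixed while
% minimizing over x; a fixed point x* of the map x^k -> x^{k+1} is a solution
% that reproduces its own mask.
\end{document}

Under this reading, the fixed-point result of the abstract concerns the map from one iterate to the next, and the special mask generator is one for which the iterates converge to a critical point of a single explicit energy functional.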