This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework that selects an optimal synthesis operator, denoted $B$, which defines a regularizer promoting sparsity in the solution of the inverse problem. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and derive sample complexity bounds. The approach is demonstrated on examples including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts on Tikhonov regularization by handling non-differentiable norms and proposing a data-driven approach to sparse regularization in infinite dimensions.
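To make the role of the synthesis operator concrete, the following is a minimal finite-dimensional sketch of the inner (lower-level) problem of such a framework: given a forward operator $A$, data $y$, and a candidate synthesis operator $B$, recover $x = Bz$ with sparse coefficients $z$ by solving $\min_z \tfrac{1}{2}\|ABz - y\|^2 + \lambda\|z\|_1$ via ISTA. This is an illustrative stand-in, not the paper's infinite-dimensional formulation; all names (`ista_synthesis`, `soft_threshold`) and the choice of solver are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (component-wise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_synthesis(A, B, y, lam, n_iter=500):
    """Illustrative inner problem: min_z 0.5*||A B z - y||^2 + lam*||z||_1.

    Sparsity is promoted in the synthesis coefficients z, and the
    reconstruction is x = B z, mirroring the role of the learned
    synthesis operator B (finite-dimensional stand-in via ISTA).
    """
    M = A @ B
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ z - y)           # gradient of the data-fidelity term
        z = soft_threshold(z - grad / L, lam / L)
    return B @ z, z
```

In a bilevel scheme, an outer loop would then adjust $B$ (e.g., a compact perturbation of a known operator, or a parametrized mother wavelet) so that the reconstructions $Bz$ fit the training data well.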