This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through theoretical infinite-dimensional examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, and through extensive numerical simulations. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
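As a schematic illustration of the bilevel framework described above, one standard way to formalize learning a sparsity-promoting synthesis operator is the following nested problem (a sketch only: the forward operator $A$, training pairs $(x^{(i)}, y^{(i)})$, the admissible class $\mathcal{B}$, and the regularization parameter $\lambda$ are assumptions for illustration and are not specified in the abstract):

```latex
\[
  \hat{B} \;\in\; \operatorname*{arg\,min}_{B \in \mathcal{B}}\;
    \frac{1}{N}\sum_{i=1}^{N} \bigl\| B\,w_i(B) - x^{(i)} \bigr\|^2,
  \qquad\text{where}\qquad
  w_i(B) \;\in\; \operatorname*{arg\,min}_{w}\;
    \tfrac{1}{2}\bigl\| A B w - y^{(i)} \bigr\|^2 + \lambda \|w\|_1 .
\]
```

The inner (lower-level) problem is a synthesis-sparsity variational reconstruction with the non-differentiable $\ell^1$ norm mentioned in the abstract; the outer (upper-level) problem selects $B$ to minimize the average reconstruction error over the training data.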