$\ell_1$ regularization is used to preserve edges or enforce sparsity in the solution of an inverse problem. We investigate the Split Bregman and Majorization-Minimization iterative methods, which turn this non-smooth minimization problem into a sequence of steps that include solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these $\ell_1$ iterative methods. The generalized cross validation and $\chi^2$ degrees of freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then held fixed to convergence.
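The scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it applies Split Bregman to $\min_x \|Ax-b\|_2^2 + \mu\|Lx\|_1$, where each outer iteration solves an inner generalized Tikhonov problem $(A^TA + \lambda L^TL)x = A^Tb + \lambda L^T(d-v)$ and a GCV criterion picks $\lambda$ over a candidate grid. The function names, the grid of $\lambda$ values, the dense small-scale linear solves, and the GCV residual approximation (which ignores the prior term in the numerator) are all illustrative assumptions.

```python
import numpy as np

def gcv_lambda(A, L, b, d, v, lams):
    """Choose lambda for the inner Tikhonov step
       min_x ||A x - b||^2 + lam ||L x - (d - v)||^2
    by minimizing a GCV function over a grid of candidates.
    (Illustrative sketch; a practical code would use the GSVD.)"""
    m = len(b)
    best_lam, best_g = lams[0], np.inf
    for lam in lams:
        M = A.T @ A + lam * L.T @ L
        infl = A @ np.linalg.solve(M, A.T)          # influence matrix
        x = np.linalg.solve(M, A.T @ b + lam * L.T @ (d - v))
        denom = np.trace(np.eye(m) - infl) ** 2
        g = m * np.sum((A @ x - b) ** 2) / denom if denom > 0 else np.inf
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

def split_bregman_l1(A, b, L, mu, n_iter=20,
                     lams=np.logspace(-4, 2, 25)):
    """Split Bregman for min_x ||A x - b||^2 + mu ||L x||_1,
    selecting the inner regularization parameter automatically
    at every outer iteration (illustrative sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    d = np.zeros(L.shape[0])       # splitting variable, d ~ L x
    v = np.zeros(L.shape[0])       # Bregman (dual) variable
    for _ in range(n_iter):
        lam = gcv_lambda(A, L, b, d, v, lams)   # per-iteration parameter
        x = np.linalg.solve(A.T @ A + lam * L.T @ L,
                            A.T @ b + lam * L.T @ (d - v))
        w = L @ x + v
        d = np.sign(w) * np.maximum(np.abs(w) - mu / (2 * lam), 0.0)  # shrinkage
        v = w - d                                # Bregman update
    return x
```

A typical use is 1D deblurring of a piecewise-constant signal with $A$ a banded blur matrix and $L$ a first-difference operator, so the $\ell_1$ term preserves the jumps that plain Tikhonov regularization would smooth out.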