$\ell_1$ regularization is used to preserve edges or enforce sparsity in the solution of an inverse problem. We investigate the Split Bregman and Majorization-Minimization iterative methods, which turn this non-smooth minimization problem into a sequence of steps that each include solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these $\ell_1$ iterative methods. The generalized cross validation (GCV) and $\chi^2$ degrees of freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then held fixed until convergence.
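As a concrete illustration of the inner parameter-selection step described above, the following is a minimal sketch of GCV-based Tikhonov parameter selection in standard form (regularization operator $L = I$), via the SVD of the forward operator. The function name `gcv_tikhonov` and the grid search over candidate parameters are illustrative assumptions, not the paper's implementation, which treats the generalized (operator $L \neq I$) case.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Pick the Tikhonov parameter by minimizing the GCV function
        G(lam) = ||A x_lam - b||^2 / (m - sum of filter factors)^2
    over a candidate grid, using the SVD of A (standard form, L = I).
    Illustrative sketch; the paper's setting is the generalized case."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                        # data projected onto left singular vectors
    b_perp2 = b @ b - beta @ beta         # residual component outside range(U)
    best_g, best_lam = np.inf, None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
        res2 = np.sum(((1.0 - f) * beta)**2) + b_perp2
        g = res2 / (m - f.sum())**2
        if g < best_g:
            best_g, best_lam = g, lam
    f = s**2 / (s**2 + best_lam**2)
    x = Vt.T @ (f * beta / s)             # filtered regularized solution
    return best_lam, x
```

Within the Split Bregman or Majorization-Minimization outer iterations, such a selection routine would be called on each inner $\ell_2$ subproblem; the abstract's observation is that the parameter estimated in early iterations can then be held fixed until convergence.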