This paper addresses the task of learning convex regularizers to guide the reconstruction of images from limited data. By imposing that the reconstruction be amplitude-equivariant, we narrow down the class of admissible functionals to those that can be expressed as a power of a seminorm. We then show that such functionals can be approximated to arbitrary precision with the help of polyhedral norms. In particular, we identify two dual parameterizations of such systems: (i) a synthesis form with an $\ell_1$-penalty that involves a learnable dictionary; and (ii) an analysis form with an $\ell_\infty$-penalty that involves a trainable regularization operator. After providing geometric insights and proving that both forms are universal, we propose an implementation that relies on a specific architecture (tight frame with a weighted $\ell_1$-penalty) that is easy to train. We illustrate its use for denoising and the reconstruction of biomedical images. We find that the proposed framework outperforms the sparsity-based methods of compressed sensing, while offering essentially the same convergence and robustness guarantees.
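As a brief sketch of the two dual parameterizations named above (the symbols $D$, $L$, $l_m$, and the exponent $p$ are illustrative placeholders, not notation fixed by the abstract), a polyhedral-norm regularizer of the stated kind can be written as

```latex
% Regularizer as a power of a (semi)norm, p >= 1 (illustrative notation)
R(\mathbf{x}) = \|\mathbf{x}\|_{\bullet}^{p},
\qquad
\underbrace{\|\mathbf{x}\|_{D} = \min_{\mathbf{z}} \bigl\{ \|\mathbf{z}\|_{1} : \mathbf{x} = D\mathbf{z} \bigr\}}_{\text{synthesis form, learnable dictionary } D},
\qquad
\underbrace{\|\mathbf{x}\|_{L} = \|L\mathbf{x}\|_{\infty} = \max_{m} \,|\langle \mathbf{l}_m, \mathbf{x}\rangle|}_{\text{analysis form, trainable operator } L}.
```

In both cases the unit ball of the norm is a polytope: the synthesis form describes it as the convex hull of the (signed) dictionary atoms, while the analysis form describes the same ball as an intersection of half-spaces, which is the duality the abstract alludes to.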