Inverse problems arise in many applications, especially tomographic imaging. We develop a Learned Alternating Minimization Algorithm (LAMA) that solves such problems via two-block optimization, synergizing data-driven and classical techniques with provable convergence. LAMA is naturally induced by a variational model with learnable regularizers in both the data and image domains, parameterized as composite functions of neural networks trained with domain-specific data. We allow these regularizers to be nonconvex and nonsmooth so that they extract features from data effectively. We minimize the overall objective function using Nesterov's smoothing technique and a residual learning architecture. We demonstrate that LAMA reduces network complexity, improves memory efficiency, and enhances reconstruction accuracy, stability, and interpretability. Extensive experiments show that LAMA significantly outperforms state-of-the-art methods on popular benchmark datasets for Computed Tomography.
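To make the two-block structure concrete, the following is a minimal illustrative sketch, not the paper's actual model or learned architecture: it alternates gradient steps on an image-domain variable `x` and a data-domain variable `z`, with a smoothed ℓ1 penalty (the Huber function, which is the Nesterov smoothing of the nonsmooth absolute value) standing in for the learned regularizers. The names `lama_sketch` and `huber_grad`, the quadratic coupling term, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def huber_grad(x, eps):
    # Gradient of the Nesterov-smoothed l1 norm (Huber function):
    # quadratic within [-eps, eps], linear with slope +/-1 outside.
    return np.clip(x / eps, -1.0, 1.0)

def lama_sketch(A, b, lam=0.1, eps=0.01, steps=200, lr=0.05):
    # Illustrative two-block alternating minimization of
    #   f(x, z) = 0.5||Ax - z||^2 + 0.5||z - b||^2 + lam * R_eps(x),
    # where R_eps is the smoothed l1 regularizer (hypothetical model,
    # not the variational model used in the paper).
    m, n = A.shape
    x = np.zeros(n)  # image-domain variable
    z = np.zeros(m)  # data-domain variable
    for _ in range(steps):
        # Block 1: gradient step in x on 0.5||Ax - z||^2 + lam * R_eps(x)
        grad_x = A.T @ (A @ x - z) + lam * huber_grad(x, eps)
        x -= lr * grad_x
        # Block 2: gradient step in z on 0.5||Ax - z||^2 + 0.5||z - b||^2
        grad_z = (z - A @ x) + (z - b)
        z -= lr * grad_z
    return x, z
```

In LAMA the handcrafted `huber_grad` term is replaced by learned network-parameterized regularizers in both blocks, and the update rules are unrolled into a network with a convergence guarantee; this sketch only shows the alternating two-block skeleton and the role of smoothing in making the nonsmooth penalty differentiable.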