We introduce a principled learning-to-optimize (L2O) framework for solving fixed-point problems involving general nonexpansive mappings. Our idea is to deliberately inject summable perturbations into a standard Krasnosel'skii-Mann iteration to improve its average-case performance over a specific distribution of problems while retaining its convergence guarantees. Under a metric subregularity assumption, we prove that the proposed parametrization includes only iterations that locally achieve linear convergence (up to a vanishing bias term) and that it encompasses all iterations that do so at a sufficiently fast rate. We then demonstrate how our framework can be used to augment several widely used operator splitting methods to accelerate the solution of structured monotone inclusion problems, and we validate our approach on a best approximation problem using an L2O-augmented Douglas-Rachford splitting algorithm.
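For concreteness, the perturbed iteration described above can be sketched in the standard inexact Krasnosel'skii-Mann form; the exact placement of the perturbation and its learned parametrization are assumptions for illustration, not details taken from the abstract:
\[
x_{k+1} \;=\; (1-\lambda_k)\,x_k \;+\; \lambda_k\,T x_k \;+\; e_k,
\qquad
\sum_{k=0}^{\infty} \|e_k\| \;<\; \infty,
\]
where $T$ is the nonexpansive mapping whose fixed point is sought, $\lambda_k \in (0,1)$ are relaxation parameters, and the summable perturbations $e_k$ carry the learned correction. Summability of the $e_k$ is what preserves the classical convergence guarantee of the unperturbed iteration.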
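As one concrete instance of the operator splitting augmentation, a hedged sketch of a relaxed Douglas-Rachford iteration for a monotone inclusion $0 \in A x + B x$ with the same kind of injected perturbation (this specific form is an assumption; the paper's exact parametrization of $e_k$ is not given here):
\[
z_{k+1} \;=\; z_k \;+\; \lambda_k \Bigl( J_{\gamma B}\bigl(2\,J_{\gamma A} z_k - z_k\bigr) - J_{\gamma A} z_k \Bigr) \;+\; e_k,
\]
where $J_{\gamma A} = (\mathrm{Id} + \gamma A)^{-1}$ and $J_{\gamma B}$ denote the resolvents with step size $\gamma > 0$. For a best approximation problem over closed convex sets, the resolvent of a normal cone operator reduces to a projection, so an L2O-augmented experiment of this kind would plausibly be instantiated with projections and simple proximal maps in place of $J_{\gamma A}$ and $J_{\gamma B}$.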