We provide abstract, general, and highly uniform rates of asymptotic regularity for a generalized stochastic Halpern-style iteration that incorporates a second mapping in the style of a Krasnoselskii-Mann iteration. This iteration is general in two ways: first, it incorporates stochasticity in a completely abstract way rather than fixing a sampling method; second, it includes as special cases stochastic versions of various schemes from the optimization literature, including Halpern's iteration as well as a Krasnoselskii-Mann iteration with Tikhonov regularization terms in the sense of Bo\c{t}, Csetnek and Meier. For these special cases, we obtain in particular linear rates of asymptotic regularity, matching (or improving) the best rates currently known for these iterations in stochastic optimization; for the general iteration, quadratic rates of asymptotic regularity are obtained in the setting of inner product spaces. We use these rates to bound the oracle complexity of such iterations under suitable variance assumptions and batching strategies, again presented in an abstract style. Finally, we sketch how the schemes presented here can be instantiated in the context of reinforcement learning to yield novel methods for Q-learning.
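For orientation, the two classical deterministic schemes named above can be sketched as follows; this is only a hedged illustration using the standard notation of the fixed-point literature (a nonexpansive mapping $T$, an anchor point $u$, and scalar parameters $\alpha_n, \beta_n \in (0,1)$), not the generalized stochastic iteration studied in the paper:
\[
x_{n+1} = \beta_n\, u + (1-\beta_n)\, T x_n \qquad \text{(Halpern)},
\]
\[
x_{n+1} = (1-\alpha_n)\, x_n + \alpha_n\, T x_n \qquad \text{(Krasnoselskii-Mann)},
\]
where, in the Halpern case, one typically assumes $\beta_n \to 0$ and $\sum_n \beta_n = \infty$. Asymptotic regularity here refers to the convergence $\|x_n - T x_n\| \to 0$, and a rate of asymptotic regularity is a quantitative bound on the speed of this convergence.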