High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching.
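To make the mechanism concrete, the following is a minimal numerical sketch, not the paper's exact algorithm: an inexact stochastic proximal point step (the quadratic proximal term makes each subproblem strongly convex, which damps gradient noise) wrapped in a distance-median probability booster that amplifies a constant per-run success probability into a high-confidence estimate. The toy least-squares objective, the inner SGD solver, and all parameter choices (`lam`, `inner_steps`, `m`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumption): minimize f(x) = E[0.5 * (a^T x - b)^2]
# with heavy-tailed observation noise, so stochastic gradients have
# bounded variance but are not sub-Gaussian.
d = 10
x_star = rng.normal(size=d)

def sample_grad(x, n=1):
    """Average of n stochastic gradients at x (heavy-tailed noise)."""
    a = rng.normal(size=(n, d))
    noise = rng.standard_t(df=3, size=n)  # heavy tails, finite variance
    b = a @ x_star + noise
    residual = a @ x - b
    return (a * residual[:, None]).mean(axis=0)

def prox_step(x, lam=1.0, inner_steps=50, inner_lr=0.05):
    """One inexact stochastic proximal point step: approximately solve
    min_y f(y) + ||y - x||^2 / (2*lam) by inner SGD. The added quadratic
    makes the subproblem strongly convex, reducing the noise's effect."""
    y = x.copy()
    for _ in range(inner_steps):
        g = sample_grad(y) + (y - x) / lam
        y -= inner_lr * g
    return y

def boosted_prox_step(x, m=11, **kw):
    """Probability booster: run the subproblem solver m times independently
    and return the candidate with the smallest median distance to the
    others. If each run lands near the true prox point with probability
    above 1/2, the selected candidate is accurate with probability
    1 - exp(-O(m))."""
    cands = np.stack([prox_step(x, **kw) for _ in range(m)])
    dists = np.linalg.norm(cands[:, None, :] - cands[None, :, :], axis=-1)
    scores = np.median(dists, axis=1)
    return cands[np.argmin(scores)]

x = np.zeros(d)
for t in range(20):
    x = boosted_prox_step(x)
print("distance to optimum:", np.linalg.norm(x - x_star))
```

The booster relies only on independence of the m runs and a constant success probability per run, so no sub-Gaussian tail bound is needed; this is the sense in which per-iteration reliability is amplified into a high-probability guarantee.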