We study the problem of parameter-free stochastic optimization, asking whether, and under what conditions, fully parameter-free methods exist: methods that achieve convergence rates competitive with optimally tuned methods without requiring significant knowledge of the true problem parameters. Existing parameter-free methods can only be considered ``partially'' parameter-free, as they require some non-trivial knowledge of the true problem parameters, such as a bound on the stochastic gradient norms or a bound on the distance to a minimizer. In the non-convex setting, we demonstrate that a simple hyperparameter search technique results in a fully parameter-free method that outperforms more sophisticated state-of-the-art algorithms. We also provide a similar result in the convex setting with access to noisy function values, under mild noise assumptions. Finally, assuming only access to stochastic gradients, we establish a lower bound that renders fully parameter-free stochastic convex optimization infeasible, and provide a method that is (partially) parameter-free up to the limit indicated by our lower bound.
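To make the ``simple hyperparameter search technique'' concrete, here is a minimal sketch, assuming the search is a log-spaced grid over SGD step sizes whose candidates are compared via noisy objective estimates; the names `run_sgd`, `estimate_objective`, and the grid bounds are illustrative placeholders, not the exact procedure analyzed in the paper.

```python
# Hypothetical sketch: tune the SGD step size by grid search and keep
# the candidate with the smallest (noisy) objective estimate.
import numpy as np

def tune_step_size(run_sgd, estimate_objective, eta_min=1e-6, eta_max=1e2, num=20):
    """Run SGD once per step size in a log-spaced grid; return the best iterate.

    run_sgd(eta)           -> final iterate of SGD with step size eta (placeholder)
    estimate_objective(x)  -> noisy estimate of the objective at x (placeholder)
    """
    grid = np.logspace(np.log10(eta_min), np.log10(eta_max), num=num)
    best_x, best_val = None, float("inf")
    for eta in grid:
        x = run_sgd(eta)             # one SGD run per candidate step size
        val = estimate_objective(x)  # compare candidates by noisy function value
        if val < best_val:
            best_x, best_val = x, val
    return best_x
```

Because the grid is logarithmic, covering a step-size range of ratio $R$ costs only $O(\log R)$ extra runs, which is why such a search can remain competitive with an optimally tuned baseline.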