We study the problem of parameter-free stochastic optimization, inquiring whether, and under what conditions, fully parameter-free methods exist: these are methods that achieve convergence rates competitive with optimally tuned methods without requiring significant knowledge of the true problem parameters. Existing parameter-free methods can only be considered ``partially'' parameter-free, as they require some non-trivial knowledge of the true problem parameters, such as a bound on the stochastic gradient norms or a bound on the distance to a minimizer. In the non-convex setting, we demonstrate that a simple hyperparameter search technique yields a fully parameter-free method that outperforms more sophisticated state-of-the-art algorithms. We also provide a similar result in the convex setting with access to noisy function values, under mild noise assumptions. Finally, assuming only access to stochastic gradients, we establish a lower bound that renders fully parameter-free stochastic convex optimization infeasible, and provide a method that is (partially) parameter-free up to the limit indicated by our lower bound.
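The abstract does not spell out the hyperparameter search procedure; as a rough illustration of the general idea only, the following is a minimal sketch: run SGD over a geometric grid of candidate step sizes and keep the candidate with the best estimated objective value. All names here (`stochastic_grad`, the grid bounds `eta_min`/`eta_max`, the toy quadratic objective) are illustrative assumptions, not the paper's actual method or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    """Noisy gradient of the toy objective f(x) = 0.5 * ||x||^2.
    (Illustrative assumption; stands in for an arbitrary stochastic oracle.)"""
    return x + 0.1 * rng.standard_normal(x.shape)

def sgd(x0, eta, num_steps):
    """Plain SGD with a fixed step size eta; returns the final iterate."""
    x = x0.copy()
    for _ in range(num_steps):
        x = x - eta * stochastic_grad(x)
    return x

def grid_search_sgd(x0, num_steps, eta_min=1e-4, eta_max=1e2):
    """Run SGD for each step size on a geometric grid spanning
    [eta_min, eta_max], then select the run with the smallest
    (estimated) objective value. The grid has O(log(eta_max/eta_min))
    points, so the overhead over a single tuned run is logarithmic."""
    num_etas = int(np.log2(eta_max / eta_min)) + 1
    etas = np.geomspace(eta_min, eta_max, num=num_etas)
    best_x, best_val = None, np.inf
    for eta in etas:
        x = sgd(x0, eta, num_steps)
        # Here the objective is evaluated exactly for the toy problem;
        # in the noisy-value setting one would average repeated evaluations.
        val = 0.5 * np.dot(x, x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_best, f_best = grid_search_sgd(np.full(10, 5.0), num_steps=500)
print(f"best objective estimate: {f_best:.4f}")
```

The design point of such a scheme is that no step-size grid endpoint needs to match the unknown problem parameters exactly: as long as the grid brackets the optimal step size, some candidate is within a constant factor of it, at only a logarithmic multiplicative cost in oracle calls.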