Optimization algorithms such as AdaGrad and Adam have significantly advanced the training of deep models by dynamically adjusting the learning rate during the optimization process. However, the ad hoc tuning of learning rates poses a challenge, leading to inefficiencies in practice. To address this issue, recent research has focused on developing "learning-rate-free" or "parameter-free" algorithms that operate effectively without the need for learning rate tuning. Despite these efforts, existing parameter-free variants of AdaGrad and Adam tend to be overly complex and/or lack formal convergence guarantees. In this paper, we present AdaGrad++ and Adam++, novel and simple parameter-free variants of AdaGrad and Adam with convergence guarantees. We prove that AdaGrad++ achieves convergence rates comparable to those of AdaGrad in convex optimization, without any predefined learning rate assumptions. Similarly, Adam++ matches the convergence rate of Adam without relying on any conditions on the learning rates. Experimental results across various deep learning tasks validate the competitive performance of AdaGrad++ and Adam++.