Adaptive methods such as Adam have become the $\textit{de facto}$ standard for large-scale optimization over vectors in Euclidean space, owing to their coordinate-wise, second-order-like variance adaptation. More recently, matrix-based spectral optimizers such as Muon (Jordan et al., 2024b) have demonstrated the power of treating weight matrices as matrices rather than as long vectors. Bridging the two is difficult: many natural generalizations are computationally infeasible, and Adam's adaptation cannot simply be transplanted onto the matrix spectrum. To address this, we reformulate the AdaGrad update and decompose it into a variance-adaptation term and a scale-invariant term. This decoupling yields $\textbf{DeVA}$ ($\textbf{De}$coupled $\textbf{V}$ariance $\textbf{A}$daptation), a framework that bridges vector-based variance adaptation and matrix spectral optimization, enabling a seamless transition from Adam to adaptive spectral descent. Extensive experiments on language modeling and image classification show that DeVA consistently outperforms state-of-the-art methods such as Muon and SOAP (Vyas et al., 2024), reducing token usage by around 6.6\%. Theoretically, we show that the variance-adaptation term improves blockwise smoothness, enabling faster convergence. Our implementation is available at https://github.com/Tsedao/Decoupled-Variance-Adaptation