In this work, we propose Natural Hypergradient Descent (NHGD), a new method for solving bilevel optimization problems. To address the computational bottleneck in hypergradient estimation, namely the need to compute or approximate the inverse of the inner-level Hessian, we exploit the statistical structure of the inner optimization problem and use the empirical Fisher information matrix as an asymptotically consistent surrogate for the Hessian. This design enables a parallel optimize-and-approximate framework in which the Hessian-inverse approximation is updated synchronously with the stochastic inner optimization, reusing gradient information at negligible additional cost. Our main theoretical contribution establishes high-probability error bounds and sample complexity guarantees for NHGD that match those of state-of-the-art optimize-then-approximate methods, while substantially reducing computational overhead. Empirical evaluations on representative bilevel learning tasks further demonstrate the practical advantages of NHGD, highlighting its scalability and effectiveness in large-scale machine learning settings.
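To make the bottleneck concrete, the following is a minimal sketch of the standard hypergradient identity and of the substitution the abstract describes; the notation \(f, g, x, y, \ell_i\) is our own shorthand rather than the paper's. The bilevel problem and its hypergradient take the form
\[
\min_{x} \; F(x) := f\bigl(x, y^{*}(x)\bigr), \qquad y^{*}(x) = \arg\min_{y} \; g(x, y),
\]
\[
\nabla F(x) = \nabla_{x} f \;-\; \nabla_{xy}^{2} g \, \bigl[\nabla_{yy}^{2} g\bigr]^{-1} \nabla_{y} f,
\]
with all derivatives evaluated at \((x, y^{*}(x))\). When the inner objective is an empirical negative log-likelihood, \(g(x, y) = \frac{1}{n} \sum_{i=1}^{n} \ell_i(x, y)\), the empirical Fisher surrogate for \(\nabla_{yy}^{2} g\) is
\[
\widehat{F}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \nabla_{y} \ell_i(x, y) \, \nabla_{y} \ell_i(x, y)^{\top},
\]
which is assembled from the same per-sample gradients the stochastic inner solver already computes and, under standard regularity conditions, is asymptotically consistent for the Hessian near \(y^{*}(x)\).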
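The parallel optimize-and-approximate idea can be sketched in code. The toy problem below is our own construction, not the paper's experiments: the outer variable is a per-coordinate ridge weight, the inner problem is regularized least squares, and the Fisher estimate is accumulated along the inner SGD trajectory from the same per-sample gradients the solver already computes (the paper's synchronous update scheme may differ in detail).

```python
# NHGD-style sketch (assumed setup, not the paper's algorithm verbatim):
# inner SGD and the empirical-Fisher accumulation run in the same loop,
# reusing each per-sample gradient; the Fisher then stands in for the
# inner Hessian when the hypergradient is formed.
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 200, 50, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)   # inner (training) data
C, c = rng.normal(size=(m, d)), rng.normal(size=m)   # outer (validation) data

def nhgd_step(x, y, steps=300, lr=0.05, damping=1e-3):
    """One outer iteration: stochastic inner optimization with a
    synchronously updated Fisher estimate, then a hypergradient."""
    F_hat = np.zeros((d, d))
    for _ in range(steps):
        i = rng.integers(n)
        g_data = A[i] * (A[i] @ y - b[i])        # per-sample data-term gradient
        y = y - lr * (g_data + x * y)            # SGD step on g = data + (1/2)*sum x_j y_j^2
        F_hat += np.outer(g_data, g_data)        # reuse the same gradient for the Fisher
    F_hat /= steps
    # Fisher surrogate for the data-term Hessian; the ridge Hessian diag(x) is exact.
    H_approx = F_hat + np.diag(x) + damping * np.eye(d)
    grad_y_f = C.T @ (C @ y - c) / m             # outer gradient w.r.t. y
    v = np.linalg.solve(H_approx, grad_y_f)      # Hessian-inverse-vector product, approximated
    hypergrad = -y * v                           # cross term d^2 g / dx dy = diag(y) here
    return hypergrad, y

x, y = np.ones(d), np.zeros(d)
for _ in range(20):
    hg, y = nhgd_step(x, y)
    x = np.clip(x - 0.1 * hg, 1e-4, None)        # keep ridge weights positive
print("learned per-coordinate ridge weights:", np.round(x, 3))
```

Note that the Fisher here is averaged over the inner trajectory purely for simplicity; the key point the sketch illustrates is that the approximation costs only an outer product per step on top of gradients that SGD computes anyway.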