Langevin dynamics (LD) is widely used for sampling from distributions and for optimization. In this work, we derive a closed-form expression for the expected loss of preconditioned LD near stationary points of the objective function. We use the fact that in the vicinity of such points, LD reduces to an Ornstein-Uhlenbeck process, which is amenable to convenient mathematical treatment. Our analysis reveals that when the preconditioning matrix satisfies a particular relation with respect to the noise covariance, LD's expected loss becomes proportional to the rank of the objective's Hessian. We illustrate the applicability of this result in the context of neural networks, where the Hessian rank has been shown to capture the complexity of the predictor function but is usually computationally hard to probe. Finally, we use our analysis to compare SGD-like and Adam-like preconditioners and identify the regimes under which each of them leads to a lower expected loss.