Motivated by the increasing popularity of overparameterized Stochastic Differential Equations (SDEs) such as Neural SDEs, Wang, Blanchet, and Glynn recently introduced the generator gradient estimator, a novel unbiased stochastic gradient estimator for SDEs whose computation time remains stable as the number of parameters grows. In this note, we demonstrate that this estimator is in fact an adjoint state method, an approach known, in the case of Ordinary Differential Equations (ODEs), to scale with the number of states rather than the number of parameters. In addition, we show that the generator gradient estimator is a close analogue of the exact Integral Path Algorithm (eIPA) estimator introduced by Gupta, Rathinam, and Khammash for a class of Continuous-Time Markov Chains (CTMCs) known as stochastic chemical reaction networks (CRNs).