Backward stochastic differential equation (BSDE)-based deep learning methods provide an alternative to Physics-Informed Neural Networks (PINNs) for solving high-dimensional partial differential equations (PDEs), offering potential algorithmic advantages in settings such as stochastic optimal control, where the PDEs of interest are tied to an underlying dynamical system. However, standard BSDE-based solvers have been shown empirically to underperform PINNs. In this paper, we identify the root cause of this performance gap as a discretization bias introduced by the standard Euler-Maruyama (EM) integration scheme applied to one-step self-consistency BSDE losses, which shifts the optimization landscape off target. We find that this bias cannot be satisfactorily addressed through finer step sizes or multi-step self-consistency losses. To handle the issue properly, we propose a Stratonovich-based BSDE formulation, which we implement with stochastic Heun integration, and we show that this approach completely eliminates the bias afflicting EM integration. Empirically, our Heun-based BSDE method consistently outperforms EM-based variants and achieves results competitive with PINNs across multiple high-dimensional benchmarks. These findings highlight the critical role of the integration scheme in BSDE-based PDE solvers, an algorithmic detail that has so far received little attention in the literature.
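To make the distinction between the two integration schemes concrete, the sketch below (not the paper's implementation) contrasts a single Euler-Maruyama step, which discretizes the Itô form of an SDE, with a stochastic Heun predictor-corrector step, which is consistent with the Stratonovich interpretation; the drift `b`, diffusion `sigma`, and the geometric-Brownian-motion example are illustrative assumptions.

```python
# Sketch: one-step Euler-Maruyama vs. stochastic Heun integration for a
# scalar SDE dX = b(X) dt + sigma(X) dW. Function and variable names are
# hypothetical, chosen for illustration only.
import numpy as np

def em_step(x, dt, dw, b, sigma):
    """Euler-Maruyama: a single step of the Ito scheme."""
    return x + b(x) * dt + sigma(x) * dw

def heun_step(x, dt, dw, b, sigma):
    """Stochastic Heun: an Euler predictor followed by a trapezoidal
    corrector, consistent with the Stratonovich interpretation."""
    x_pred = x + b(x) * dt + sigma(x) * dw    # predictor (EM step)
    drift = 0.5 * (b(x) + b(x_pred))          # averaged drift
    diff = 0.5 * (sigma(x) + sigma(x_pred))   # averaged diffusion
    return x + drift * dt + diff * dw         # corrector

# Illustrative example: geometric Brownian motion with
# b(x) = mu * x and sigma(x) = nu * x.
rng = np.random.default_rng(0)
mu, nu, dt = 0.05, 0.2, 1e-2
x0 = 1.0
dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
print(em_step(x0, dt, dw, lambda x: mu * x, lambda x: nu * x))
print(heun_step(x0, dt, dw, lambda x: mu * x, lambda x: nu * x))
```

For state-dependent (multiplicative) diffusion, as above, the two steps differ at order dW², which is where the Itô-Stratonovich correction and the discretization bias discussed in the abstract originate; for additive noise the diffusion term is identical in both schemes.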