Splitting methods are widely used for solving initial value problems (IVPs) because they decompose a complicated evolution into simpler subproblems that can be solved efficiently and accurately. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie-algebraic analogue, the Baker--Campbell--Hausdorff formula. These tools enable the construction of high-order numerical methods that provide exceptional accuracy for small timesteps, and the resulting schemes often (nearly) conserve important physical invariants such as mass, unitarity, and energy. In many practical applications, however, computational resources are limited, so it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which may require relatively large timesteps. In this regime, traditionally derived high-order methods often exhibit large errors, since they are designed only to be asymptotically optimal. Machine learning techniques offer a potential remedy, since they can be trained to solve a given IVP efficiently with fewer computational resources. However, such approaches are often purely data-driven, come with limited convergence guarantees in the small-timestep regime, and do not necessarily conserve physical invariants. In this work, we propose a framework for finding machine-learned splitting methods that are computationally efficient for large timesteps and have provable convergence and conservation guarantees in the small-timestep limit. We demonstrate numerically that the learned methods, which by construction converge quadratically in the timestep size, can be significantly more efficient than established methods for the Schr\"{o}dinger equation when the computational budget is limited.
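As a minimal illustration of the kind of splitting method discussed above (this is the classical Strang splitting, not the learned method of this work), the sketch below applies a second-order split-step scheme to a 1D Schrödinger equation $i\,\partial_t\psi = -\tfrac12\,\partial_x^2\psi + V(x)\psi$: the potential part is diagonal in real space and the kinetic part is diagonal in Fourier space, and each substep is exactly unitary, so the mass (L2 norm) is conserved by construction. The harmonic potential and grid parameters are illustrative choices.

```python
import numpy as np

def strang_step(psi, dt, k, V):
    """One Strang (second-order) splitting step:
    half-step with V, full kinetic step in Fourier space, half-step with V."""
    psi = np.exp(-0.5j * dt * V) * psi                                   # potential half-step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))      # kinetic full step
    psi = np.exp(-0.5j * dt * V) * psi                                   # potential half-step
    return psi

# Illustrative setup: harmonic potential, Gaussian initial state.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2
psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))  # normalize the mass to 1

dt = 0.01
for _ in range(100):
    psi = strang_step(psi, dt, k, V)

# Each factor has unit modulus and the FFT is unitary, so the norm
# is preserved to machine precision regardless of the timestep.
norm = np.sqrt(np.sum(np.abs(psi)**2) * (L / n))
print(round(norm, 10))  # → 1.0
```

Because every substep is a unitary map, this conservation holds even for large timesteps, where the accuracy (but not the unitarity) of the scheme degrades; this is the regime the learned methods target.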