In many problems in physics and engineering, one encounters complicated differential equations with strongly scale-dependent terms for which neither exact analytical solutions nor reliable global numerical solutions are available. A common strategy is to divide the domain into several regions (patches) and simplify the equation in each region. When approximate analytic solutions can be obtained in each patch, they are matched at the interfaces to construct a global solution. However, this patching procedure can fail to reproduce the correct solution, since the approximate forms may break down near the matching boundaries. In this work, we propose a learning framework in which the integration constants of asymptotic analytic solutions are promoted to scale-dependent functions. By constraining these coefficient functions with the original differential equation over the full domain, the network learns a globally valid solution that smoothly interpolates between asymptotic regimes, eliminating the need for arbitrary boundary matching. We demonstrate the effectiveness of this framework on representative problems from chemical kinetics and cosmology, where it accurately reproduces global solutions and outperforms conventional matching procedures.
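The core idea of promoting an integration constant to a scale-dependent function constrained by the original equation can be illustrated on a toy linear ODE. The sketch below is not from the paper: the equation y' + y = exp(-2t), the exp(-kt) basis, and the least-squares fit standing in for the neural network are all illustrative assumptions. The homogeneous asymptotic solution is y = C exp(-t); promoting C to C(t) and substituting into the full equation leaves a residual that the coefficient function must annihilate globally.

```python
import numpy as np

# Toy illustration (assumed example, not from the paper):
#   ODE:  y'(t) + y(t) = exp(-2 t),  y(0) = 0.
# The homogeneous ("asymptotic") solution is y = C exp(-t).  Promoting the
# integration constant C -> C(t) and substituting into the full ODE gives
#   R(t) = C'(t) exp(-t) - exp(-2 t),
# which we drive to zero over the whole domain.  A small linear basis
# expansion C(t) = sum_k a_k exp(-k t) stands in for the network here.

K = 5                                  # number of basis functions
t = np.linspace(0.0, 5.0, 200)         # collocation points

# Residual rows: each column k holds d/dt[exp(-k t)] * exp(-t).
A = np.stack([-k * np.exp(-(k + 1) * t) for k in range(K)], axis=1)
b = np.exp(-2.0 * t)

# Append the initial condition y(0) = C(0) = sum_k a_k = 0 as one more row.
A = np.vstack([A, np.ones((1, K))])
b = np.concatenate([b, [0.0]])

# Fit the coefficient function by least squares over the collocation grid.
a, *_ = np.linalg.lstsq(A, b, rcond=None)

# Reconstruct the globally valid solution y(t) = C(t) exp(-t) and compare
# with the exact solution exp(-t) - exp(-2 t).
C = sum(a[k] * np.exp(-k * t) for k in range(K))
y = C * np.exp(-t)
y_exact = np.exp(-t) - np.exp(-2.0 * t)
err = np.max(np.abs(y - y_exact))
print(f"max abs error: {err:.2e}")
```

In this toy case the promotion is exact (it reduces to variation of parameters, with C(t) = 1 - exp(-t)), so the recovered solution matches the exact one to machine precision; in the paper's setting the coefficient functions are instead represented by a network and trained against the equation residual.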