Neural surrogates for partial differential equations (PDEs) have become popular due to their potential to simulate physics quickly. With a few exceptions, neural surrogates generally treat the forward evolution of time-dependent PDEs as a black box by directly predicting the next state. While this is a natural and convenient framework for applying neural surrogates, it can be an over-simplified and rigid framework for predicting physics. In this work, we propose an alternative framework in which the neural solver predicts the temporal derivative and an ODE integrator advances the solution in time; this adds little overhead and is broadly applicable across model architectures and PDEs. We find that by simply changing the training target and introducing numerical integration during inference, neural surrogates can gain accuracy and stability. Predicting temporal derivatives also frees models from a fixed temporal discretization, allowing flexible time-stepping during inference or training on higher-resolution PDE data. Lastly, we investigate why this new framework can be beneficial and in what situations it works well.
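The framework described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a hypothetical `surrogate_dudt` stands in for a trained neural network that predicts the temporal derivative, and a known linear decay du/dt = -u is used so the rollout can be checked analytically. The integrator here is classic RK4, though any ODE integrator could be substituted; note that the step size `dt` becomes a free parameter at inference time.

```python
import numpy as np

def surrogate_dudt(u):
    # Stand-in for a neural network trained to predict du/dt.
    # Here: known linear decay du/dt = -u, so the exact solution
    # is u(t) = u(0) * exp(-t) and the rollout can be verified.
    return -u

def rk4_step(f, u, dt):
    # Classic fourth-order Runge-Kutta step on the predicted derivative.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(u0, dt, n_steps):
    # Because the model outputs a derivative rather than the next state,
    # dt is chosen at inference time (flexible time-stepping).
    u = u0
    for _ in range(n_steps):
        u = rk4_step(surrogate_dudt, u, dt)
    return u

u0 = np.array([1.0])
u_final = rollout(u0, dt=0.1, n_steps=10)  # integrate to t = 1
```

Swapping the training target from the next state to du/dt leaves the model architecture untouched; only the loss target and this inference-time loop change.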