In this paper, we propose \textbf{FMint} (\textbf{F}oundation \textbf{M}odel based on \textbf{In}i\textbf{t}ialization), a pre-trained foundation model designed to speed up large-scale simulation of diverse differential equations with high accuracy via error correction. Human-designed simulation algorithms excel at capturing the underlying physics of engineering problems, but must balance a trade-off between accuracy and efficiency. Deep learning methods, while offering innovative solutions across many scientific fields, often lack domain-specific knowledge. FMint bridges these gaps: it is conditioned on initial coarse solutions obtained from conventional human-designed algorithms and trained to produce refined solutions for a variety of differential equations. Built on the backbone of large language models, FMint adapts the in-context learning scheme to learn a universal error-correction method for dynamical systems from prompted sequences of coarse solutions. The model is pre-trained on a corpus of 600K ordinary differential equations (ODEs), and we conduct extensive experiments on both in-distribution and out-of-distribution tasks. FMint outperforms various baselines on large-scale simulation and generalizes to unseen ODEs. Our approach improves accuracy by one to two orders of magnitude over state-of-the-art dynamical-system simulators and delivers a $5\times$ speedup over traditional numerical algorithms.
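The error-correction setup can be sketched as follows; the notation here is illustrative rather than taken from the paper. Let $S_{\Delta t}$ denote one step of a conventional solver run with a deliberately large step size $\Delta t$, $u(t_n)$ the true trajectory at time $t_n$, and $\mathcal{F}_\theta$ the pre-trained model:
\begin{align*}
\hat{u}_{n+1} &= S_{\Delta t}(\hat{u}_n) && \text{coarse solution from a human-designed solver,} \\
u(t_{n+1}) &\approx S_{\Delta t}\big(u(t_n)\big) + \mathrm{err}_n && \text{fine solution as coarse step plus correction,} \\
\mathrm{err}_n &\approx \mathcal{F}_\theta\Big(\big\{(\hat{u}_k, \mathrm{err}_k)\big\}_{k<n},\, \hat{u}_n\Big) && \text{correction predicted in-context from prompted pairs.}
\end{align*}
Under this sketch, the large step $\Delta t$ keeps the simulation fast, while the learned correction $\mathrm{err}_n$, inferred from demonstration pairs in the prompt, recovers the accuracy of a fine-grained solver.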