Fast and accurate simulation of dynamical systems is a fundamental challenge across scientific and engineering domains. Traditional numerical integrators often face a trade-off between accuracy and computational efficiency, while existing neural network-based approaches typically require training a separate model for each new system. To overcome these limitations, we introduce a novel multi-modal foundation model for large-scale simulations of differential equations: FMint-SDE (Foundation Model based on Initialization for stochastic differential equations). Based on a decoder-only transformer with in-context learning, FMint-SDE leverages numerical and textual modalities to learn a universal error-correction scheme. It is trained using prompted sequences of coarse solutions generated by conventional solvers, enabling broad generalization across diverse systems. We evaluate our models on a suite of challenging SDE benchmarks spanning applications in molecular dynamics, mechanical systems, finance, and biology. Experimental results show that our approach achieves a superior accuracy-efficiency trade-off compared to classical solvers, underscoring the potential of FMint-SDE as a general-purpose simulation tool for dynamical systems.
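The pipeline described above — a conventional solver producing coarse trajectories that a learned model then corrects — can be sketched as follows. The Euler–Maruyama integrator is the standard coarse solver for SDEs of the form dX = f(X)dt + g(X)dW; the `correction_model` argument is a hypothetical stand-in (not the paper's actual architecture) for the transformer that predicts the local error of each coarse step.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng):
    """Coarse Euler-Maruyama trajectory for dX = drift(X) dt + diffusion(X) dW."""
    dt = (t1 - t0) / n_steps
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = xs[-1]
        # Brownian increment dW ~ N(0, dt)
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        xs.append(x + drift(x) * dt + diffusion(x) * dw)
    return np.stack(xs)  # shape: (n_steps + 1, dim)

def corrected_step(x, coarse_step, correction_model):
    """Refine one coarse solver step with a learned error-correction term.

    `correction_model` is a placeholder for a trained network that maps the
    (state, coarse prediction) pair to an estimate of the local error.
    """
    x_coarse = coarse_step(x)
    return x_coarse + correction_model(x, x_coarse)

# Example: Ornstein-Uhlenbeck process dX = -theta * X dt + sigma dW.
rng = np.random.default_rng(0)
traj = euler_maruyama(drift=lambda x: -0.5 * x,
                      diffusion=lambda x: 0.3 * np.ones_like(x),
                      x0=[1.0], t0=0.0, t1=1.0, n_steps=20, rng=rng)

# With a zero correction, the refined step reduces to the coarse step.
refined = corrected_step(traj[0],
                         coarse_step=lambda x: x + 0.1,
                         correction_model=lambda x, xc: np.zeros_like(xc))
```

In the paper's setting, the correction network is trained on prompted sequences of such coarse solutions, so a single model can amortize the refinement across many systems instead of being retrained per equation.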