Pretraining methods have recently gained increasing attention for solving PDEs with neural operators. By training on large-scale datasets spanning a variety of PDEs and exploiting patterns shared among them, pretraining alleviates the data scarcity that neural operator learning faces when solving a single PDE and improves solution precision. In this work, we propose the Latent Neural Operator Pretraining (LNOP) framework built on the Latent Neural Operator (LNO) backbone. We achieve a universal transformation by pretraining on a hybrid time-dependent PDE dataset to extract representations of different physical systems, and we solve various time-dependent PDEs in the latent space by finetuning on single-PDE datasets. Our proposed LNOP framework reduces the solution error by 31.7% on four problems, and this reduction improves further to 57.1% after finetuning. On out-of-distribution datasets, our LNOP model achieves roughly 50% lower error and 3$\times$ higher data efficiency on average across different dataset sizes. These results show that our method is more competitive than non-pretrained neural operators in terms of solution precision, transfer capability and data efficiency.
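To make the two-stage workflow concrete, the following is a deliberately minimal sketch of the pretrain-then-finetune pattern the abstract describes: a shared transformation maps physical fields into a latent space, a per-PDE propagator evolves the latent state, and the result is decoded back. All names, shapes, and the linear-map stand-ins here are illustrative assumptions, not the paper's actual LNO architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_field, d_latent = 16, 4

# Shared "universal" transformation (in LNOP, pretrained on a
# hybrid multi-PDE dataset; here just fixed random linear maps).
W_enc = rng.normal(size=(d_latent, d_field)) * 0.1
W_dec = rng.normal(size=(d_field, d_latent)) * 0.1

def encode(u):
    # Physical field -> latent representation.
    return W_enc @ u

def decode(z):
    # Latent representation -> physical field.
    return W_dec @ z

# Per-PDE latent propagator (in LNOP, finetuned on a single-PDE
# dataset; here a placeholder contraction).
A = np.eye(d_latent) * 0.9

def step(u):
    """One solver step: encode, evolve in latent space, decode."""
    return decode(A @ encode(u))

u0 = rng.normal(size=d_field)
u1 = step(u0)
```

The point of the sketch is the division of labor: the encoder/decoder pair is shared across physical systems and learned once during pretraining, while only the latent propagator is adapted per PDE during finetuning.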