Pretraining methods have recently gained increasing attention for solving PDEs with neural operators. By training on large-scale datasets consisting of various PDEs and exploiting patterns shared among them, pretraining alleviates the data scarcity problem that neural operator learning encounters when solving a single PDE and improves solution precision. In this work, we propose the Latent Neural Operator Pretraining (LNOP) framework, built on the Latent Neural Operator (LNO) backbone. We achieve a universal transformation by pretraining on a hybrid dataset of time-dependent PDEs to extract representations of different physical systems, and we solve various time-dependent PDEs in the latent space by finetuning on single-PDE datasets. Our proposed LNOP framework reduces the solution error by 31.7% on four problems, a reduction that improves to 57.1% after finetuning. On out-of-distribution datasets, our LNOP model achieves roughly 50% lower error and 3$\times$ higher data efficiency on average across different dataset sizes. These results show that our method is more competitive than non-pretrained neural operators in terms of solution precision, transfer capability, and data efficiency.