We propose a general framework for conditional sampling in PDE-based inverse problems, targeting the recovery of whole solutions from extremely sparse or noisy measurements. This is accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method first trains an unconditional, discretization-agnostic denoising model using neural operator architectures. At inference, we refine the samples to satisfy sparse observation data via a gradient-based guidance mechanism. Through rigorous mathematical analysis, we extend Tweedie's formula to infinite-dimensional Banach spaces, providing the theoretical foundation for our posterior sampling approach. Our method (FunDPS) accurately captures posterior distributions in function spaces under minimal supervision and severe data scarcity. Across five PDE tasks with only 3% observation coverage, our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines while reducing sampling steps by 4x. Furthermore, multi-resolution fine-tuning ensures strong cross-resolution generalizability. To the best of our knowledge, this is the first diffusion-based framework to operate independently of discretization, offering a practical and flexible solution for forward and inverse problems in the context of PDEs. Code is available at https://github.com/neuraloperator/FunDPS.
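The core inference loop described above combines a Tweedie posterior-mean estimate with a gradient step toward the sparse observations. As a rough illustration only (a finite-dimensional NumPy sketch, not the authors' FunDPS implementation, and with the hypothetical names `tweedie_denoise` and `dps_guidance_step`), one DPS-style guidance step might look like:

```python
import numpy as np

def tweedie_denoise(x_t, score, sigma):
    """Tweedie's formula: for x_t = x_0 + sigma * eps with eps ~ N(0, I),
    the posterior mean is E[x_0 | x_t] = x_t + sigma^2 * score(x_t)."""
    return x_t + sigma**2 * score(x_t)

def dps_guidance_step(x_t, score, sigma, mask, y, step_size):
    """One plug-and-play guidance step enforcing sparse observations y
    at the points selected by `mask` (a 0/1 array)."""
    x0_hat = tweedie_denoise(x_t, score, sigma)
    residual = mask * (x0_hat - y)  # data misfit only where we observe
    # Gradient of 0.5 * ||mask * (x0_hat - y)||^2, ignoring the Jacobian
    # of x0_hat w.r.t. x_t -- a common simplifying approximation in DPS.
    return x_t - step_size * residual
```

With a standard Gaussian prior, `score(x) = -x / (1 + sigma**2)`, so `tweedie_denoise` reduces to the exact posterior mean `x_t / (1 + sigma**2)`, which is a quick way to sanity-check the formula. The paper's contribution is carrying this reasoning into infinite-dimensional function spaces with a neural-operator denoiser.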