Neural PDE surrogates are often deployed in data-limited or partially observed regimes where downstream decisions depend on calibrated uncertainty in addition to low prediction error. Existing approaches obtain uncertainty through ensemble replication, fixed stochastic noise such as dropout, or post hoc calibration. Cross-regularized uncertainty instead learns uncertainty parameters during training, using gradients routed through a held-out regularization split: the predictor is optimized on the training split for fit, while low-dimensional uncertainty controls are optimized on the regularization split to reduce train-test mismatch, yielding regime-adaptive uncertainty without per-regime noise tuning. The framework can learn continuous noise levels at the output head, within hidden features, or within operator-specific components such as spectral modes. We instantiate the approach in Fourier Neural Operators and evaluate it on APEBench sweeps over observed fraction and training-set size. Across these sweeps, the learned predictive distributions are better calibrated on held-out splits, and the resulting uncertainty fields concentrate in high-error regions in one-step spatial diagnostics.
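The split-routed optimization described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy one-parameter linear predictor with a single global log-noise control, and uses hand-derived Gaussian NLL gradients, but it shows the core idea of updating the predictor on the training split while the uncertainty parameter is updated only on the held-out regularization split.

```python
import numpy as np

# Hypothetical toy setup: scalar linear model y = w * x + noise,
# with a training split (fits w) and a regularization split
# (fits the global noise level sigma = exp(log_sigma)).
rng = np.random.default_rng(0)
true_w, noise = 2.0, 0.5
x_tr, x_reg = rng.normal(size=64), rng.normal(size=64)
y_tr = true_w * x_tr + noise * rng.normal(size=64)
y_reg = true_w * x_reg + noise * rng.normal(size=64)

w, log_sigma = 0.0, 0.0  # predictor weight; learned uncertainty control
lr = 0.05
for _ in range(500):
    # Predictor step: MSE gradient computed on the TRAINING split only.
    r_tr = w * x_tr - y_tr
    w -= lr * np.mean(2.0 * r_tr * x_tr)

    # Uncertainty step: Gaussian NLL gradient w.r.t. log_sigma,
    # computed on the held-out REGULARIZATION split only.
    # mean NLL = log_sigma + r^2 / (2 sigma^2), so
    # d(NLL)/d(log_sigma) = 1 - mean(r^2) / sigma^2.
    r_reg = w * x_reg - y_reg
    sigma2 = np.exp(2.0 * log_sigma)
    log_sigma -= lr * (1.0 - np.mean(r_reg**2) / sigma2)

sigma = np.exp(log_sigma)
```

At convergence, `sigma` approaches the residual scale on the held-out split rather than the (optimistically small) training residual, which is the mechanism by which the learned uncertainty tracks train-test mismatch.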