Uncertainty quantification (UQ) in scientific machine learning is increasingly critical as neural networks are widely adopted to tackle complex problems across diverse scientific disciplines. For physics-informed neural networks (PINNs), a prominent class of models in scientific machine learning, uncertainty is typically quantified using Bayesian or dropout methods. However, both approaches suffer from a fundamental limitation: the prior distribution or dropout rate required to construct honest confidence sets cannot be determined without additional information. In this paper, we propose a novel method within the framework of extended fiducial inference (EFI) that provides rigorous uncertainty quantification for PINNs. The proposed method employs a narrow-neck hyper-network to simultaneously learn the parameters of the PINN and quantify their uncertainty, based on random errors imputed for the observations. This approach overcomes the limitations of Bayesian and dropout methods, enabling the construction of honest confidence sets from the observed data alone. This advance represents a significant step forward for PINNs, enhancing their reliability, interpretability, and applicability to real-world scientific and engineering problems. Moreover, it establishes a new theoretical framework for EFI, extending its application to large-scale models, eliminating the need for sparse hyper-networks, and substantially improving the automaticity and robustness of statistical inference.
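To make the mechanism concrete, the following is a minimal, hypothetical sketch, assuming PyTorch, of how imputed random errors can be mapped through a narrow-neck hyper-network to the parameters of a small PINN. All names here (`NarrowNeckHyperNet`, `pinn_forward`, `neck_dim`, and so on) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a narrow-neck hyper-network mapping data and
# imputed errors z to the weight vector of a tiny PINN. Not the authors' code.
import torch
import torch.nn as nn

class NarrowNeckHyperNet(nn.Module):
    """Maps (x, y, imputed errors z) to PINN parameters via a narrow bottleneck."""
    def __init__(self, n_obs: int, n_pinn_params: int, neck_dim: int = 4):
        super().__init__()
        # Encoder: observations and imputed errors -> low-dimensional "neck".
        self.encoder = nn.Sequential(
            nn.Linear(3 * n_obs, 64), nn.Tanh(),
            nn.Linear(64, neck_dim),          # the narrow neck
        )
        # Decoder: neck -> flattened PINN weight vector.
        self.decoder = nn.Linear(neck_dim, n_pinn_params)

    def forward(self, x, y, z):
        h = torch.cat([x, y, z], dim=-1)      # shape: (3 * n_obs,)
        return self.decoder(self.encoder(h))  # shape: (n_pinn_params,)

def pinn_forward(params, x, hidden=8):
    """A one-hidden-layer network u(x) whose weights come from the hyper-net."""
    w1 = params[:hidden].view(hidden, 1)
    b1 = params[hidden:2 * hidden]
    w2 = params[2 * hidden:3 * hidden].view(1, hidden)
    b2 = params[3 * hidden]
    return (torch.tanh(x.view(-1, 1) @ w1.T + b1) @ w2.T).squeeze(-1) + b2

# Toy usage: n noisy observations of an unknown solution u(x).
n, hidden = 20, 8
n_params = 3 * hidden + 1
x = torch.linspace(0.0, 1.0, n)
y = torch.sin(torch.pi * x) + 0.05 * torch.randn(n)   # synthetic data

hyper = NarrowNeckHyperNet(n_obs=n, n_pinn_params=n_params)
z = (0.05 * torch.randn(n)).requires_grad_()          # imputed random errors

theta = hyper(x, y, z)                                # PINN parameters
u = pinn_forward(theta, x)
fit_residual = y - u - z   # the structural equation y = u(x; theta) + z
print(fit_residual.abs().mean())                      # ~0 once trained
```

In the paper's EFI framework, the imputed errors are sampled given the observed data, and each draw induces a draw of the PINN parameters through the hyper-network, from which confidence sets are constructed; a physics-residual term on the governing PDE would also enter the training objective, which this sketch omits.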