Physics-informed neural networks (PINNs) provide a promising framework for solving inverse problems governed by partial differential equations (PDEs) by integrating observational data and physical constraints into a unified optimization objective. However, the ill-posed nature of PDE inverse problems makes them highly sensitive to noise: even a small fraction of corrupted observations can distort internal neural representations, severely impairing accuracy and destabilizing training. Motivated by recent advances in machine unlearning and structured network pruning, we propose P-PINN, a selective pruning framework designed to unlearn the influence of corrupted data in a pretrained PINN. Specifically, starting from a PINN trained on the full dataset, P-PINN evaluates a joint residual--data fidelity indicator, a weighted combination of data misfit and PDE residuals, to partition the training set into reliable and corrupted subsets. Next, we introduce a bias-based neuron importance measure that quantifies directional activation discrepancies between the two subsets, identifying neurons whose representations are predominantly driven by corrupted samples. Building on this, an iterative pruning strategy removes noise-sensitive neurons layer by layer. The resulting pruned network is fine-tuned on the reliable data subject to the original PDE constraints, acting as a lightweight post-processing stage rather than a complete retraining. Extensive numerical experiments on PDE inverse-problem benchmarks demonstrate that P-PINN substantially improves robustness, accuracy, and training stability under noisy conditions, achieving up to a 96.6\% reduction in relative error compared with baseline PINNs. These results indicate that activation-level post hoc pruning is a promising mechanism for enhancing the reliability of physics-informed learning in noise-contaminated settings.
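The pipeline summarized above, scoring each sample with a weighted combination of data misfit and PDE residual, partitioning the training set by that score, and ranking neurons by the activation discrepancy between the two subsets, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the weight `alpha`, the fixed `threshold`, and the `neuron_importance` helper (a simple mean-activation gap) are hypothetical simplifications of the joint indicator and the bias-based importance measure.

```python
import numpy as np

def joint_indicator(data_misfit, pde_residual, alpha=0.5):
    """Per-sample joint residual-data fidelity score.

    `alpha` is a hypothetical weight balancing data misfit against
    the PDE residual; the paper's actual weighting may differ.
    """
    return alpha * data_misfit + (1.0 - alpha) * pde_residual

def partition(scores, threshold):
    """Split samples into reliable (low score) and corrupted (high score) subsets."""
    reliable = scores <= threshold
    return reliable, ~reliable

def neuron_importance(acts_reliable, acts_corrupted):
    """Simplified stand-in for the bias-based importance measure:
    the absolute gap between mean activations on the two subsets.
    Neurons with a large gap are treated as noise-sensitive."""
    return np.abs(acts_corrupted.mean(axis=0) - acts_reliable.mean(axis=0))

# Toy usage: sample 1 has a large misfit and is flagged as corrupted.
misfit = np.array([0.1, 2.0, 0.2])
residual = np.array([0.1, 1.0, 0.1])
scores = joint_indicator(misfit, residual, alpha=0.5)
reliable, corrupted = partition(scores, threshold=0.5)

# Toy activations (rows: samples, columns: neurons); neuron 0 responds
# very differently on corrupted samples and would be pruned first.
acts_r = np.array([[0.0, 1.0], [0.0, 1.0]])
acts_c = np.array([[2.0, 1.0], [2.0, 1.0]])
importance = neuron_importance(acts_r, acts_c)
```

In an iterative scheme, the highest-importance neurons would be removed layer by layer and the pruned network fine-tuned on the reliable subset under the PDE constraints.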