This paper introduces a dynamic, error-bounded hierarchical matrix (H-matrix) compression method tailored for Physics-Informed Neural Networks (PINNs). The proposed approach reduces the computational complexity and memory demands of large-scale physics-based models while preserving the essential properties of the Neural Tangent Kernel (NTK). By adaptively refining hierarchical matrix approximations based on local error estimates, our method ensures efficient training and robust model performance. Empirical results demonstrate that this technique outperforms traditional compression methods, such as Singular Value Decomposition (SVD), pruning, and quantization, by maintaining high accuracy and improving generalization capabilities. Additionally, the dynamic H-matrix method enhances inference speed, making it suitable for real-time applications. This approach offers a scalable and efficient solution for deploying PINNs in complex scientific and engineering domains, bridging the gap between computational feasibility and real-world applicability.
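The core idea of an error-bounded block approximation can be sketched with a truncated SVD whose rank is chosen per block from a local error estimate. This is a minimal illustrative stand-in, not the paper's actual H-matrix implementation: the function name, tolerance parameter, and spectral-norm criterion are assumptions for the sketch.

```python
import numpy as np

def compress_block(block, tol):
    """Low-rank approximation of one matrix block.

    Keeps the smallest rank whose spectral-norm truncation error is
    below `tol` -- a simplified stand-in for choosing each block's rank
    adaptively from a local error estimate (illustrative only).
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    # The spectral-norm error of a rank-k truncation equals s[k],
    # the largest discarded singular value.
    rank = len(s)
    for k in range(len(s)):
        if s[k] <= tol:
            rank = max(k, 1)
            break
    return U[:, :rank] * s[:rank], Vt[:rank]

# Demo: a 64x64 block of exact rank 4 compresses to rank 4
# with reconstruction error below the tolerance.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
L, R = compress_block(A, tol=1e-8)
err = np.linalg.norm(A - L @ R, 2)
```

In a hierarchical matrix this choice would be made independently for each admissible off-diagonal block, so well-separated interactions get low ranks while near-field blocks stay dense.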