Neural networks have emerged as powerful tools for modeling complex physical systems, yet balancing high accuracy with computational efficiency during training remains a critical challenge. In this work, we propose the Hybrid Parallel Kolmogorov-Arnold Network (KAN) and Multi-Layer Perceptron (MLP) Physics-Informed Neural Network (HPKM-PINN), a novel architecture that synergistically integrates parallelized KAN and MLP branches within a unified PINN framework. The HPKM-PINN introduces a scaling factor ξ to optimally balance the complementary strengths of KAN's interpretable function approximation and MLP's nonlinear feature learning, thereby enhancing predictive performance through a weighted fusion of their outputs. Through systematic numerical evaluations, we elucidate the impact of the scaling factor ξ on the model's performance in both function approximation and partial differential equation (PDE) solving tasks. Benchmark experiments across canonical PDEs, such as the Poisson and advection equations, demonstrate that HPKM-PINN achieves a marked decrease in loss values (reducing relative error by two orders of magnitude) compared to standalone KAN or MLP models. Furthermore, the framework exhibits numerical stability and robustness when applied to various physical systems. These findings highlight the HPKM-PINN's ability to leverage KAN's interpretability and MLP's expressivity, positioning it as a versatile and scalable tool for solving complex PDE-driven problems in computational science and engineering.
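The abstract describes a weighted fusion of two parallel branch outputs controlled by the scaling factor ξ. A minimal NumPy sketch of that forward pass is shown below; the exact fusion rule ξ·u_KAN + (1−ξ)·u_MLP, the branch sizes, and the polynomial stand-in for KAN's learnable spline activations are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def mlp_branch(x, W1, b1, W2, b2):
    """MLP branch: one tanh hidden layer (hypothetical sizes)."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def kan_branch(x, coeffs):
    """KAN-style branch: sum of learnable univariate functions per input
    dimension. A tiny polynomial basis [1, x, x^2] stands in here for the
    B-spline activations used in actual KANs (an assumption)."""
    basis = np.stack([np.ones_like(x), x, x**2], axis=-1)   # (n, d, 3)
    return (basis * coeffs).sum(axis=(1, 2))[:, None]       # (n, 1)

def hpkm_forward(x, xi, mlp_params, kan_coeffs):
    """Weighted fusion of the two parallel branches with scaling factor xi,
    assumed here to interpolate linearly between the branches."""
    u_kan = kan_branch(x, kan_coeffs)
    u_mlp = mlp_branch(x, *mlp_params)
    return xi * u_kan + (1.0 - xi) * u_mlp
```

With ξ = 1 the model reduces to the pure KAN branch and with ξ = 0 to the pure MLP branch; intermediate values blend the two, which is the quantity the abstract's ξ-sweep experiments would vary.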