Kolmogorov-Arnold Networks (KANs) have recently demonstrated promising potential in scientific machine learning, partly due to their capacity for grid adaptation during training. However, existing adaptation strategies rely solely on input data density, failing to account for the geometric complexity of the target function or for metrics computed during network training. In this work, we propose a generalized framework that treats knot allocation as a density estimation task governed by Importance Density Functions (IDFs), allowing training dynamics to determine grid resolution. We introduce a curvature-based adaptation strategy and evaluate it on synthetic function fitting, regression on a subset of the Feynman dataset, and several instances of the Helmholtz PDE, demonstrating that it significantly outperforms the standard input-based baseline. Specifically, our method yields average relative error reductions of 25.3% on synthetic functions, 9.4% on the Feynman dataset, and 23.3% on the PDE benchmark. Statistical significance is confirmed via Wilcoxon signed-rank tests, establishing curvature-based adaptation as a robust and computationally efficient alternative for KAN training.
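To make the core idea concrete, the following is a minimal, hedged sketch of curvature-based knot allocation in one dimension. It is not the paper's implementation: the IDF here is simply assumed to be proportional to the magnitude of a finite-difference second derivative (a curvature proxy), with a small uniform floor so flat regions still receive knots; knot positions are then drawn by inverse-CDF sampling of that density. The function name `curvature_idf_knots` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def curvature_idf_knots(xs, ys, num_knots):
    """Place num_knots knots on [xs[0], xs[-1]] by inverse-CDF sampling of an
    importance density proportional to |f''| (plus a small floor).
    Illustrative sketch only; not the paper's exact IDF or KAN integration."""
    d2 = np.gradient(np.gradient(ys, xs), xs)      # finite-difference f''
    idf = np.abs(d2) + 1e-3 * np.abs(d2).max()     # curvature proxy + uniform floor
    cdf = np.cumsum(idf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])      # normalize CDF to [0, 1]
    quantiles = np.linspace(0.0, 1.0, num_knots)
    return np.interp(quantiles, cdf, xs)           # inverse-CDF knot positions

# Example: a sharp transition near x = 0 should attract most of the knots.
xs = np.linspace(-1.0, 1.0, 2001)
ys = np.tanh(10 * xs)
knots = curvature_idf_knots(xs, ys, 11)
```

Because nearly all of the curvature mass of `tanh(10x)` is concentrated near the origin, the interior knots cluster there, while the floor term keeps the two boundary knots pinned at the domain endpoints.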