We propose DropKAN (Dropout Kolmogorov-Arnold Networks), a regularization method that prevents co-adaptation of activation function weights in Kolmogorov-Arnold Networks (KANs). DropKAN operates by randomly masking some of the post-activations within the KAN computation graph, while scaling up the retained post-activations. We show that this simple procedure, which requires minimal coding effort, has a regularizing effect and consistently leads to better generalization of KANs. We analyze the adaptation of standard Dropout to KANs and demonstrate that Dropout applied to KANs' neurons can lead to unpredictable behaviour in the feedforward pass. We carry out an empirical study with real-world machine learning datasets to validate our findings. Our results suggest that DropKAN is consistently a better alternative than standard Dropout and improves the generalization performance of KANs. Our implementation of DropKAN is available at: \url{https://github.com/Ghaith81/dropkan}.
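To make the masking rule concrete, the sketch below illustrates DropKAN-style masking of post-activations with inverted scaling. This is a minimal illustration, not the released implementation; the tensor name \texttt{post\_acts} (the per-edge post-activation values of one KAN layer) and the function \texttt{dropkan\_mask} are hypothetical names introduced here for exposition.

\begin{verbatim}
# Minimal sketch (not the authors' implementation) of DropKAN-style masking.
# Assumption: `post_acts` holds the post-activation values of a KAN layer,
# i.e. the values produced by the edge activation functions before they are
# summed into the next layer's nodes.
import torch

def dropkan_mask(post_acts: torch.Tensor, drop_rate: float = 0.1,
                 training: bool = True) -> torch.Tensor:
    """Randomly zero post-activations and scale up the retained ones."""
    if not training or drop_rate == 0.0:
        return post_acts
    keep_prob = 1.0 - drop_rate
    # Sample an independent keep/drop decision for every post-activation.
    mask = torch.bernoulli(torch.full_like(post_acts, keep_prob))
    # Inverted scaling keeps the expected sum of post-activations unchanged.
    return post_acts * mask / keep_prob
\end{verbatim}

In contrast to standard Dropout, which zeroes whole neuron outputs, this masking acts on individual post-activations before they are aggregated, which is the distinction the abstract draws between DropKAN and Dropout applied to KANs' neurons.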