Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach to function approximation, demonstrating remarkable potential in various domains. Despite their theoretical promise, the robustness of KANs under adversarial conditions has yet to be thoroughly examined. In this paper, we explore the adversarial robustness of KANs, with a particular focus on image classification tasks. We assess the performance of KANs against standard white-box and black-box adversarial attacks, comparing their resilience to that of established neural network architectures. Our experimental evaluation encompasses a variety of standard image classification benchmark datasets and investigates both fully connected and convolutional architectures at three sizes: small, medium, and large. We conclude that small- and medium-sized KANs (whether fully connected or convolutional) are not consistently more robust than their standard counterparts, but that large-sized KANs are, by and large, more robust. This comprehensive evaluation of KANs in adversarial scenarios offers the first in-depth analysis of KAN security, laying the groundwork for future research in this emerging field.
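To make the white-box attack setting concrete, the sketch below implements one-step FGSM (fast gradient sign method), a standard white-box attack of the kind referenced above: the input is perturbed in the sign of the loss gradient with respect to the input. This is a minimal toy illustration, not the paper's experimental setup; a two-feature logistic-regression classifier (with a hand-derived gradient) stands in for the KAN and standard architectures evaluated in the paper, and the weights, input, and `eps` value are all invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Binary prediction of the toy logistic-regression 'model'."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of binary cross-entropy w.r.t. the INPUT (not the weights):
    # for logistic regression this is (p - y) * w.
    grad_x = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad_x)]

# Hypothetical toy model and clean input (all values invented for illustration).
w, b = [1.0, -1.0], 0.0
x, y = [0.2, -0.1], 1.0          # clean input, true label 1

print(predict(x, w, b))           # classified as class 1 (correct)
x_adv = fgsm_attack(x, y, w, b, eps=0.2)
print(predict(x_adv, w, b))       # small perturbation flips the prediction
```

The same gradient-sign step, computed by automatic differentiation through the full network, is what white-box attacks such as FGSM apply to image classifiers; black-box attacks instead estimate or transfer this direction without access to the model's gradients.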