Ultrafast online learning is essential for high-frequency systems, such as control systems for quantum computing and nuclear fusion, where adaptation must occur on sub-microsecond timescales. Meeting these requirements demands low-latency, fixed-point computation under strict memory constraints, a regime in which conventional Multi-Layer Perceptrons (MLPs) are both inefficient and numerically unstable. We identify key properties of Kolmogorov-Arnold Networks (KANs) that align with these constraints. Specifically, we show that: (i) KAN updates exploiting B-spline locality are sparse, enabling superior on-chip resource scaling, and (ii) KANs are inherently robust to fixed-point quantization. By implementing fixed-point online training on Field-Programmable Gate Arrays (FPGAs), a representative platform for on-chip computation, we demonstrate that KAN-based online learners are significantly more efficient and expressive than MLPs across a range of low-latency and resource-constrained tasks. To our knowledge, this work is the first to demonstrate model-free online learning at sub-microsecond latencies.
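To make the locality claim in (i) concrete, the sketch below is a minimal, floating-point illustration (not the paper's FPGA implementation): for a degree-k B-spline over G grid intervals, a single input activates only k+1 of the G+k basis functions, so a one-sample gradient step on one KAN edge writes only k+1 coefficients. The grid size, learning rate, and helper name `bspline_basis` are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of claim (i): B-spline locality makes per-sample KAN updates sparse.
import numpy as np

def bspline_basis(x, knots, k):
    """Cox-de Boor recursion: values of all degree-k B-spline bases at x."""
    n = len(knots) - k - 1                      # number of basis functions (= G + k here)
    B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for d in range(1, k + 1):
        Bn = np.zeros(len(knots) - d - 1)
        for i in range(len(Bn)):
            left = 0.0 if knots[i + d] == knots[i] else \
                (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            right = 0.0 if knots[i + d + 1] == knots[i + 1] else \
                (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
            Bn[i] = left + right
        B = Bn
    return B[:n]

k, G = 3, 8                                     # cubic spline, 8 grid intervals on [-1, 1] (assumed)
h = 2.0 / G
knots = np.linspace(-1 - k * h, 1 + k * h, G + 2 * k + 1)   # uniform grid, extended by k knots per side
coef = np.zeros(G + k)                          # spline coefficients of one KAN edge

x, target, lr = 0.3, 1.0, 0.1                   # a single online sample and step size (assumed)
basis = bspline_basis(x, knots, k)
pred = float(basis @ coef)
active = np.nonzero(basis)[0]                   # exactly k + 1 indices are nonzero
coef[active] -= lr * (pred - target) * basis[active]        # sparse write: k + 1 coefficients, not G + k
print(f"touched {len(active)} of {len(coef)} coefficients") # -> touched 4 of 11 coefficients
```

On hardware this locality means each online step reads and writes a small, fixed number of coefficient words per edge, which is what drives the favorable on-chip resource scaling relative to an MLP's dense weight updates.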