Traditional neural networks employ fixed weights during inference, limiting their ability to adapt to changing input conditions, unlike biological neurons that adjust signal strength dynamically based on stimuli. This discrepancy between artificial and biological neurons constrains neural network flexibility and adaptability. To bridge this gap, we propose a novel framework for adaptive neural networks, in which neuron weights are modeled as functions of the input signal, allowing the network to adjust dynamically in real time. Importantly, we achieve this within the traditional architecture of an Artificial Neural Network, maintaining structural familiarity while introducing dynamic adaptability. In our research, we apply Chebyshev polynomials as one of many possible decomposition methods to realize this adaptive weighting mechanism, with the polynomial coefficients learned during training. Out of the 145 datasets tested, our adaptive Chebyshev neural network demonstrated a marked improvement over an equivalent MLP in approximately 83\% of cases, performing strictly better on 121 datasets. On the remaining 24 datasets, the performance of our algorithm matched that of the MLP, highlighting its ability to subsume standard neural network behavior while offering enhanced adaptability. As a generalized form of the MLP, this model retains MLP performance where needed while extending its capabilities to achieve superior accuracy on a wide range of complex tasks. These results underscore the potential of adaptive neurons to enhance generalization, flexibility, and robustness in neural networks, particularly in applications with dynamic or non-linear data dependencies.
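The core idea above can be sketched as follows: each weight is expanded in a Chebyshev basis of its input coordinate, $w_i(x) = \sum_k c_{ik} T_k(x_i)$, with the coefficients $c_{ik}$ as the trainable parameters. The minimal NumPy sketch below illustrates one such adaptive neuron; the class and parameter names are illustrative assumptions, not the paper's actual implementation, and inputs are assumed pre-scaled to $[-1, 1]$, the natural domain of the Chebyshev polynomials.

```python
import numpy as np

def chebyshev_basis(x, degree):
    """Evaluate T_0..T_degree elementwise at x via the recurrence
    T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x); x is assumed in [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: degree + 1])  # shape (degree+1, *x.shape)

class AdaptiveNeuron:
    """Illustrative neuron with input-dependent weights:
    w_i(x) = sum_k c[i, k] * T_k(x_i).  Names are hypothetical."""
    def __init__(self, n_inputs, degree, rng=None):
        rng = np.random.default_rng(rng)
        # Trainable Chebyshev coefficients, one row per input feature.
        self.c = rng.normal(scale=0.1, size=(n_inputs, degree + 1))
        self.b = 0.0

    def forward(self, x):
        T = chebyshev_basis(x, self.c.shape[1] - 1)  # (degree+1, n_inputs)
        w = np.einsum("ik,ki->i", self.c, T)         # input-dependent weights
        return w @ x + self.b
```

Note how this generalizes a standard MLP neuron: if only the constant-term coefficients `c[:, 0]` are nonzero, the weights reduce to fixed values and the neuron behaves exactly like its fixed-weight counterpart, consistent with the claim that the model subsumes the MLP.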