Hebbian learning limits the storage capacity of Hopfield networks to a pattern-to-neuron ratio of about 0.14. We propose learning with Kernel Logistic Regression (KLR). Unlike linear methods, KLR uses a kernel to implicitly map patterns into a high-dimensional feature space, enhancing their separability. By learning dual variables, KLR dramatically improves storage capacity, achieving perfect recall even when the number of stored patterns exceeds the number of neurons (ratios up to 1.5 are demonstrated), and improves noise robustness. KLR demonstrably outperforms both Hebbian learning and linear logistic regression.
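The scheme described above can be sketched in a few lines of numpy; this is a minimal illustration under stated assumptions, not the authors' implementation. The RBF kernel, its width `gamma`, the learning rate, iteration count, and network sizes are all illustrative choices: one kernel logistic regression per neuron learns dual variables over the stored patterns, and recall iterates a kernel readout followed by a sign nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 60                               # 60 patterns, 50 neurons: ratio 1.2 > 1
X = rng.choice([-1.0, 1.0], size=(P, N))    # bipolar patterns to store

def rbf_kernel(A, B, gamma=0.05):
    """RBF Gram matrix between the rows of A and B (kernel and gamma are assumptions)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)                        # P x P Gram matrix over stored patterns

# Learn dual variables alpha (P x N): column i is a kernel logistic
# regression predicting neuron i's bit from the whole pattern.
T = (X + 1) / 2                             # targets in {0, 1}
alpha = np.zeros((P, N))
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-K @ alpha))    # predicted P(bit = +1) for each pattern
    alpha += lr * K @ (T - p) / P           # gradient ascent on the log-likelihood

def recall(state, steps=20):
    """Synchronous recall: kernel readout against stored patterns, then sign."""
    s = state.copy()
    for _ in range(steps):
        k = rbf_kernel(s[None, :], X)[0]    # similarity of s to each stored pattern
        s = np.sign(k @ alpha)
        s[s == 0] = 1.0                     # break exact ties deterministically
    return s

# Corrupt a stored pattern with 10% bit flips and try to recover it.
probe = X[0].copy()
probe[rng.choice(N, size=5, replace=False)] *= -1
overlap = (recall(probe) == X[0]).mean()
print(f"overlap with stored pattern: {overlap:.2f}")
```

Note that `P > N` here, beyond the Hebbian capacity limit; the stored patterns remain fixed points of the recall dynamics because each neuron's classifier separates the patterns in the kernel-induced feature space.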