Kernel-based learning methods such as Kernel Logistic Regression (KLR) can substantially increase the storage capacity of Hopfield networks, but the principles governing their performance and stability remain largely uncharacterized. This paper presents a comprehensive quantitative analysis of the attractor landscape in KLR-trained networks to establish a solid foundation for their design and application. Through extensive, statistically validated simulations, we address critical questions of generality, scalability, and robustness. Our comparative analysis shows that KLR and Kernel Ridge Regression (KRR) exhibit similarly high storage capacities and clean attractor landscapes under typical operating conditions, suggesting that this behavior is a general property of kernel regression methods, although KRR is computationally much faster. We identify a non-trivial, scale-dependent law for the kernel width $\gamma$, demonstrating that optimal capacity requires $\gamma$ to be scaled such that $\gamma N$ increases with network size $N$. This finding implies that larger networks require more localized kernels, in which each stored pattern exerts a more spatially confined influence, to mitigate inter-pattern interference. Under this optimized scaling, we provide clear evidence that storage capacity scales linearly with network size~($P \propto N$). Furthermore, our sensitivity analysis shows that performance is remarkably robust to the choice of the regularization parameter $\lambda$. Collectively, these findings provide a concise set of empirical principles for designing high-capacity, robust associative memories and clarify the mechanisms by which kernel methods overcome the classical limitations of Hopfield-type models.
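To make the setup concrete, the following is a minimal illustrative Python sketch of a KRR-trained associative memory of the kind analyzed here. It is our own construction, not the paper's implementation: each neuron's output is read out through an RBF-kernel ridge regression fitted to the stored patterns, and recall iterates that readout to a fixed point. The hyperparameter values `gamma` and `lam` are placeholders, not the optimized scaling identified in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Gaussian kernel: K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def train_krr(patterns, gamma, lam):
    # One ridge-regression readout per neuron: alpha = (K + lam*I)^{-1} X
    K = rbf_kernel(patterns, patterns, gamma)
    return np.linalg.solve(K + lam * np.eye(len(patterns)), patterns)

def recall(probe, patterns, alpha, gamma, max_steps=50):
    # Iterate the kernel readout s -> sign(k(s, X) @ alpha) to a fixed point
    s = probe.copy()
    for _ in range(max_steps):
        k = rbf_kernel(s[None, :], patterns, gamma)   # shape (1, P)
        s_new = np.sign(k @ alpha)[0]
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
N, P = 200, 40                                # network size, number of patterns
X = rng.choice([-1.0, 1.0], size=(P, N))      # random bipolar patterns
gamma, lam = 1.0 / N, 1e-3                    # placeholder hyperparameters
alpha = train_krr(X, gamma, lam)
noisy = X[0] * rng.choice([1.0, -1.0], size=N, p=[0.8, 0.2])  # ~20% bit flips
print("overlap with stored pattern:", recall(noisy, X, alpha, gamma) @ X[0] / N)
```

In this sketch, scaling experiments of the kind reported above would vary $N$ while adjusting `gamma` so that $\gamma N$ grows with $N$, and measure the largest $P$ at which noisy probes still converge to their stored patterns.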