High-capacity associative memories based on Kernel Logistic Regression (KLR) exhibit strong storage capabilities, but the dynamical and geometric mechanisms underlying their stability remain poorly understood. This paper investigates the global geometry of attractor basins and the physical determinants of the storage limit in KLR-trained Hopfield networks. We combine empirical evaluations on random sequences and real-world image embeddings (CIFAR-10) with phenomenological morphing experiments and a statistical Signal-to-Noise Ratio (SNR) analysis. Our experiments show that the network achieves a storage capacity of up to $P/N \approx 16$ for random sequences and maintains stable retrieval of structured data at effective loads near $P/N \approx 20$. Morphing analysis reveals that attractors on the "Ridge of Optimization" are separated by sharp, phase-transition-like boundaries, characterized by steep effective potential barriers and critical slowing down. Furthermore, by contrasting the SNR analysis with a geometric reference point inspired by Cover's theorem, we show that the ultimate storage limit is set not by a lack of geometric separability in the feature space, but by the loss of dynamical stability against crosstalk noise. These findings suggest that KLR networks function as highly localized, exemplar-based memories that operate optimally just before the onset of dynamical collapse, offering new insights into the design of robust, large-scale retrieval systems.
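To make the setup concrete, the following is a minimal toy sketch of a KLR-trained associative memory with attractor-style retrieval dynamics. It is illustrative only: the network scale, the RBF kernel, the plain gradient-ascent trainer, and all parameter values are assumptions for demonstration, not the paper's implementation or its reported capacities.

```python
import numpy as np

# Toy KLR associative memory (illustrative assumptions throughout).
rng = np.random.default_rng(0)
N, P = 64, 32                                  # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(P, N))       # random bipolar memories

def rbf_kernel(A, B, gamma=0.05):
    """Gaussian kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# One kernel logistic regression per neuron: predict bit i of a pattern
# from the whole pattern.  Dual weights alpha fitted by gradient ascent.
K = rbf_kernel(X, X)
y = (X + 1) / 2                                # targets in {0, 1}
alpha = np.zeros((P, N))
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-K @ alpha))       # per-neuron sigmoid outputs
    alpha += 0.1 * (K @ (y - p)) / P           # unregularized gradient step

def recall(s, steps=20):
    """Iterate the KLR read-out as a discrete dynamics until a fixed point."""
    for _ in range(steps):
        k = rbf_kernel(s[None, :], X)          # similarity to each memory
        s_new = np.sign((k @ alpha).ravel())
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt 10% of one stored pattern; the dynamics should restore it.
probe = X[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1.0
overlap = float(np.mean(recall(probe) == X[0]))
print(overlap)
```

At this small load the corrupted probe falls back onto the stored pattern in a step or two; the paper's questions begin where this stops working, i.e. where crosstalk from the other stored patterns destabilizes the fixed points.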