In this paper, we study recurrent neural networks in the presence of pairwise learning rules. We are specifically interested in how the attractor landscapes of such networks are altered as a function of the strength and nature (Hebbian vs. anti-Hebbian) of learning, which may bear on the ability of such rules to mediate large-scale optimization problems. Through formal analysis, we show that the transition from Hebbian to anti-Hebbian learning induces a pitchfork bifurcation that destroys convexity in the network attractor landscape. In larger-scale settings, this implies that anti-Hebbian plasticity will give rise to multiple stable equilibria, and such effects may be outsized at interconnection or `choke' points. Furthermore, attractor landscapes are more sensitive to slower learning rates than to faster ones. These results provide insight into the types of objective functions that can be encoded via different pairwise plasticity rules.
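As a minimal one-dimensional sketch of this transition (an illustrative reduction, not the paper's full network model; associating the sign of the bifurcation parameter $\mu$ with the learning regime is our assumption here), the supercritical pitchfork normal form captures the stated behavior:
\[
\dot{x} = \mu x - x^{3}, \qquad x^{*} = 0 \ \ (\text{stable for } \mu \le 0), \qquad x^{*} = \pm\sqrt{\mu} \ \ (\text{stable for } \mu > 0),
\]
where $\mu \le 0$ plays the role of the Hebbian regime, with a single (convex) basin at the origin, and $\mu > 0$ the anti-Hebbian regime, in which the origin destabilizes and two stable equilibria emerge, consistent with the claimed loss of convexity and onset of multistability.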