We consider the problem of training a neural network to store a set of patterns with maximal noise robustness. A solution, in terms of optimal weights and state update rules, is derived by training each individual neuron to perform either kernel classification or minimum-weight-norm kernel interpolation. By applying this method to feed-forward and recurrent networks, we derive optimal models, termed kernel memory networks, that include, as special cases, many of the hetero- and auto-associative memory models proposed in recent years, such as modern Hopfield networks and Kanerva's sparse distributed memory. We modify Kanerva's model and demonstrate a simple way to design a kernel memory network that can store an exponential number of continuous-valued patterns with a finite basin of attraction. The framework of kernel memory networks offers a simple and intuitive way to understand the storage capacity of previous memory models, and allows for new biological interpretations in terms of dendritic non-linearities and synaptic cross-talk.
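As a minimal illustrative sketch of the idea in the abstract (not the paper's exact construction), each neuron of a recurrent auto-associative memory can be trained to interpolate its target values over the stored patterns with a kernel; because the interpolant is exact on the training set, every stored pattern is a fixed point of the recall dynamics. The kernel choice (an exponential dot-product kernel, reminiscent of modern Hopfield networks), the inverse temperature `beta`, and the pattern dimensions are all assumptions made for this toy example:

```python
import numpy as np

# Toy kernel auto-associative memory: each neuron's output is a minimum-norm
# kernel interpolant of its target bit over the stored patterns.
rng = np.random.default_rng(0)
P, N = 20, 64
X = rng.choice([-1.0, 1.0], size=(P, N))   # P binary patterns to store

beta = 4.0  # inverse temperature of the exponential kernel (assumed value)

def kernel(A, B):
    # exponential kernel on scaled pattern overlaps
    return np.exp(beta * A @ B.T / N)

K = kernel(X, X)                  # P x P Gram matrix
alpha = np.linalg.solve(K, X)     # interpolation coefficients: f(x_mu) = x_mu exactly

def update(s):
    # one synchronous recall step: each neuron thresholds its kernel interpolant
    return np.sign(kernel(s[None, :], X) @ alpha).ravel()

# recall from a corrupted copy of the first pattern (6 flipped bits)
s = X[0].copy()
flip = rng.choice(N, size=6, replace=False)
s[flip] *= -1.0
for _ in range(5):
    s = update(s)
print(np.array_equal(s, X[0]))
```

Since `kernel(X[0], X) @ alpha` reproduces the first row of `K @ K^{-1} X = X`, the stored pattern is exactly a fixed point; with a sufficiently sharp kernel (large `beta`), nearby corrupted states fall inside its basin of attraction and are cleaned up within a few update steps.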