Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been proposed as a potentially more biologically realistic alternative to backpropagation for training neural networks. This manuscript reviews and extends recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. Implications of these results for interpreting predictive coding and deep neural networks as models of biological learning are discussed. A repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models is also provided.
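To make the comparison concrete, the following is a minimal NumPy sketch of supervised predictive coding on a small feedforward network, in the style reviewed in the manuscript: value nodes are relaxed to minimize prediction errors while the output layer is clamped to the label, after which weights receive local, Hebbian-like updates. This is an illustrative sketch only, not the Torch2PC API; all function and variable names here are the author's own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net: 2 -> 8 -> 1, tanh activations between layers.
sizes = [2, 8, 1]
W = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.5 for i in range(2)]

def f(x):
    return np.tanh(x)

def df(x):
    return 1.0 - np.tanh(x) ** 2

def pc_step(x_in, target, W, n_infer=50, lr_x=0.1, lr_w=0.01):
    """One predictive-coding training step on a single example.

    Returns the settled energy F = 0.5 * sum_l ||e_l||^2.
    """
    # Initialise value nodes with a feedforward pass, then clamp the output.
    x = [x_in]
    for Wl in W:
        x.append(Wl @ f(x[-1]))
    x[-1] = target

    # Inference phase: relax hidden value nodes to reduce prediction errors.
    for _ in range(n_infer):
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        for l in range(1, len(x) - 1):
            dx = -e[l - 1] + df(x[l]) * (W[l].T @ e[l])
            x[l] = x[l] + lr_x * dx

    # Learning phase: local weight updates from the settled errors.
    e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] = W[l] + lr_w * np.outer(e[l], f(x[l]))
    return float(0.5 * sum(np.sum(el ** 2) for el in e))

# Toy supervised task: sign of x0 * x1 (an XOR-like mapping).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
Y = np.array([[-0.5], [0.5], [0.5], [-0.5]])

losses = []
for epoch in range(200):
    losses.append(sum(pc_step(X[i], Y[i], W) for i in range(4)))
print(f"energy: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In this scheme every update depends only on quantities local to a layer and its neighbors, which is the property that motivates predictive coding as a more biologically plausible training rule; the manuscript's results concern how closely the resulting updates match those of backpropagation.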