Predictive coding (PC) is an influential theory of information processing in the brain, providing a biologically plausible alternative to backpropagation. It is motivated in terms of Bayesian inference, as hidden states and parameters are optimised via gradient descent on variational free energy. However, implementations of PC rely on maximum \textit{a posteriori} (MAP) estimates of hidden states and maximum likelihood (ML) estimates of parameters, limiting their ability to quantify epistemic uncertainty. In this work, we investigate a Bayesian extension to PC that estimates a posterior distribution over network parameters. This approach, termed Bayesian predictive coding (BPC), preserves the locality of PC and results in closed-form Hebbian weight updates. Compared to PC, our BPC algorithm converges in fewer epochs in the full-batch setting and remains competitive in the mini-batch setting. Additionally, we demonstrate that BPC offers uncertainty quantification comparable to existing methods in Bayesian deep learning, while also improving convergence properties. Together, these results suggest that BPC provides a biologically plausible method for Bayesian learning in the brain, as well as an attractive approach to uncertainty quantification in deep learning.
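For context, a minimal sketch of the standard PC scheme that the abstract summarises, in the common hierarchical Gaussian formulation (following Rao \& Ballard and Friston; the notation below is illustrative and not taken from this paper): hidden states $x_l$ and weights $W_l$ descend the variational free energy
\begin{align*}
\mathcal{F} &= \sum_{l=0}^{L-1} \frac{1}{2\sigma_l^2} \left\lVert x_l - f(W_l x_{l+1}) \right\rVert^2, &
\Delta x_l &\propto -\frac{\partial \mathcal{F}}{\partial x_l}, &
\Delta W_l &\propto -\frac{\partial \mathcal{F}}{\partial W_l},
\end{align*}
where $f$ is an activation function and $\sigma_l^2$ a layer-wise noise variance. Inference over $x_l$ then yields MAP estimates and learning of $W_l$ yields ML point estimates; this is the limitation BPC addresses by maintaining a posterior distribution over $W_l$ instead.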