Feedback Alignment (FA) methods are biologically inspired local learning rules for training neural networks with reduced communication between layers. While FA has potential applications in distributed and privacy-aware ML, its limitations on multi-class classification tasks and the lack of a theoretical understanding of the alignment mechanism have constrained its impact. This study introduces a unified framework elucidating the operational principles behind alignment in FA. Our key contributions include: (1) a novel conservation law linking changes in synaptic weights to an implicit regularization that maintains alignment with the gradient, supported by experiments; (2) sufficient conditions for convergence based on the concept of alignment dominance; and (3) an empirical analysis showing that better alignment can enhance FA performance on complex multi-class tasks. Overall, these theoretical and practical advances improve the interpretability of bio-plausible learning rules and provide groundwork for developing enhanced FA algorithms.
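To make the mechanism concrete, the following is a minimal NumPy sketch of the standard FA update rule, in which the error signal is propagated through a fixed random feedback matrix B instead of the transposed forward weights. The layer sizes, toy data, and learning rate are illustrative assumptions, not details from this study.

```python
# Minimal sketch of the feedback alignment update rule for a two-layer
# network with a squared-error loss. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 64, 5

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix (replaces W2.T)

def relu(x):
    return np.maximum(x, 0.0)

x = rng.normal(size=(n_in,))              # toy input
y = np.eye(n_out)[rng.integers(n_out)]    # toy one-hot target
lr = 0.05

for step in range(200):
    # forward pass
    h = relu(W1 @ x)
    y_hat = W2 @ h

    # output error
    e = y_hat - y

    # backprop would propagate the error via W2.T @ e; FA uses the fixed
    # random matrix B instead, so no layer communicates its forward weights
    delta_h = (B @ e) * (h > 0)

    # local weight updates
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

During training, the forward weights tend to rotate so that the true gradient signal `W2.T @ e` increasingly agrees with the feedback signal `B @ e`; this growing agreement is the alignment the abstract refers to.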