Biological neural networks seem qualitatively superior (e.g. in learning, flexibility, robustness) to current artificial ones, such as the Multi-Layer Perceptron (MLP) or Kolmogorov-Arnold Network (KAN). At the same time, in contrast to them, biological networks have fundamentally multidirectional signal propagation \cite{axon}, can also propagate probability distributions, e.g. for uncertainty estimation, and are believed to be unable to use standard backpropagation training \cite{backprop}. This article proposes novel artificial neurons based on HCR (Hierarchical Correlation Reconstruction), which remove the above low-level differences: each neuron contains a local joint distribution model (of its connections), representing the joint density of normalized variables as just a linear combination of orthonormal polynomials $(f_\mathbf{j})$: $\rho(\mathbf{x})=\sum_{\mathbf{j}\in B} a_\mathbf{j} f_\mathbf{j}(\mathbf{x})$ for $\mathbf{x} \in [0,1]^d$ and some chosen basis $B\subset \mathbb{N}^d$. By various index summations of the tensor $(a_\mathbf{j})_{\mathbf{j}\in B}$ of neuron parameters, we obtain simple formulas, e.g. for conditional expected values, enabling propagation in any direction, like $E[x|y,z]$, $E[y|x]$, which degenerate to a KAN-like parametrization when restricted to pairwise dependencies. Such an HCR network can also propagate probability distributions (also joint), like $\rho(y,z|x)$. It additionally allows for further training approaches, like direct estimation of $(a_\mathbf{j})$, tensor decomposition, or more biologically plausible information bottleneck training: each layer directly influences only its neighbors, optimizing its content to maximize information about the next layer while minimizing information about the previous one, to remove noise and extract the crucial information.
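As a minimal illustration of the formulas above, the sketch below estimates the pairwise coefficient tensor $(a_{ij})$ for two normalized variables and evaluates $E[y|x]$ from it. It assumes the rescaled Legendre polynomials on $[0,1]$ as the orthonormal basis $(f_j)$ and uses the standard mean-of-products coefficient estimator; the function names (`estimate_coeffs`, `cond_expectation`) and the synthetic correlated data are illustrative choices, not part of the paper.

```python
import numpy as np

# First four orthonormal (rescaled Legendre) polynomials on [0,1];
# f_0 = 1 keeps the density model normalized.
def f(j, x):
    x = np.asarray(x, dtype=float)
    if j == 0: return np.ones_like(x)
    if j == 1: return np.sqrt(3.0) * (2*x - 1)
    if j == 2: return np.sqrt(5.0) * (6*x**2 - 6*x + 1)
    if j == 3: return np.sqrt(7.0) * (20*x**3 - 30*x**2 + 12*x - 1)
    raise ValueError("basis function not implemented")

def estimate_coeffs(x, y, m=4):
    """Direct coefficient estimation: a_{ij} = mean_k f_i(x_k) f_j(y_k)."""
    return np.array([[np.mean(f(i, x) * f(j, y)) for j in range(m)]
                     for i in range(m)])

def cond_expectation(a, x):
    """E[y|x] from rho(y|x) = sum_ij a_ij f_i(x) f_j(y) / sum_i a_i0 f_i(x).
    Only j = 0, 1 contribute, since int_0^1 y f_j(y) dy = 0 for j >= 2
    (f_j is orthogonal to span{1, y}); the nonzero moments are
    int y f_0(y) dy = 1/2 and int y f_1(y) dy = 1/(2*sqrt(3))."""
    num0 = sum(a[i, 0] * f(i, x) for i in range(a.shape[0]))  # marginal rho(x)
    num1 = sum(a[i, 1] * f(i, x) for i in range(a.shape[0]))
    return 0.5 + num1 / (2.0 * np.sqrt(3.0) * num0)

# Synthetic positively dependent pair of variables normalized to [0,1]:
rng = np.random.default_rng(0)
u = rng.random(10000)
x = np.clip(u + 0.1 * rng.standard_normal(10000), 0.0, 1.0)
y = np.clip(u + 0.1 * rng.standard_normal(10000), 0.0, 1.0)

a = estimate_coeffs(x, y)
print(cond_expectation(a, 0.2), cond_expectation(a, 0.8))
```

The same tensor $(a_{ij})$ supports propagation in the opposite direction, $E[x|y]$, by summing over the first index instead of the second, which is the multidirectionality the abstract refers to.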