Activation functions govern the expressivity and stability of neural networks, yet existing comparisons remain largely heuristic. We propose a rigorous framework for their classification via a nine-dimensional integral signature S_sigma(phi), combining Gaussian propagation statistics (m1, g1, g2, m2, eta), asymptotic slopes (alpha_plus, alpha_minus), and regularity measures (TV(phi'), C(phi)). This taxonomy establishes well-posedness, affine reparameterization laws with bias, and closure under bounded slope variation. Dynamical analysis yields Lyapunov theorems with explicit descent constants and identifies variance stability regions through (m2', g2). From a kernel perspective, we derive dimension-free Hessian bounds and connect smoothness to the bounded variation of phi'. Applying the framework, we classify eight standard activations (ReLU, leaky ReLU, tanh, sigmoid, Swish, GELU, Mish, TeLU), proving sharp distinctions between the saturating, linear-growth, and smooth families. Numerical validation via Gauss-Hermite quadrature and Monte Carlo sampling confirms the theoretical predictions. Our framework provides principled design guidance, grounding activation choice in provable stability and kernel conditioning rather than trial and error.
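The sketch below illustrates, under assumed definitions, how the Gaussian propagation statistics entering S_sigma(phi) might be estimated numerically with the Gauss-Hermite and Monte Carlo machinery mentioned above. The abstract does not define the individual quantities, so the interpretations used here (m1, m2 as first and second Gaussian moments of phi; g1, g2 as moments of phi') are assumptions for illustration only, as are the function names.

```python
# Minimal sketch (not the paper's reference implementation) of estimating
# Gaussian propagation statistics of an activation phi under z ~ N(0, sigma^2).
# ASSUMPTION: m1 = E[phi(z)], m2 = E[phi(z)^2], g1 = E[phi'(z)], g2 = E[phi'(z)^2];
# the paper's exact definitions may differ.
import numpy as np

def gauss_hermite_stats(phi, dphi, sigma=1.0, n_nodes=80):
    """Estimate (m1, m2, g1, g2) via Gauss-Hermite quadrature."""
    # Physicists' Hermite nodes/weights: integral of f(x) exp(-x^2) dx ~ sum w_i f(x_i).
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * sigma * x          # change of variables to N(0, sigma^2)
    norm = 1.0 / np.sqrt(np.pi)
    m1 = norm * np.sum(w * phi(z))
    m2 = norm * np.sum(w * phi(z) ** 2)
    g1 = norm * np.sum(w * dphi(z))
    g2 = norm * np.sum(w * dphi(z) ** 2)
    return m1, m2, g1, g2

def monte_carlo_stats(phi, dphi, sigma=1.0, n_samples=1_000_000, seed=0):
    """Monte Carlo cross-check of the same four statistics."""
    z = sigma * np.random.default_rng(seed).standard_normal(n_samples)
    return phi(z).mean(), (phi(z) ** 2).mean(), dphi(z).mean(), (dphi(z) ** 2).mean()

# Example: ReLU, whose derivative is the Heaviside step almost everywhere.
relu = lambda z: np.maximum(z, 0.0)
drelu = lambda z: (z > 0).astype(float)
print(gauss_hermite_stats(relu, drelu, sigma=1.0))   # ~ (0.399, 0.5, 0.5, 0.5)
print(monte_carlo_stats(relu, drelu, sigma=1.0))
```

For smooth activations the quadrature converges rapidly in n_nodes; for piecewise-linear ones such as ReLU the kink slows convergence, which is one reason a Monte Carlo cross-check is useful.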