We present a unified representation of the most popular neural network activation functions. Adopting the Mittag-Leffler functions of fractional calculus, we propose a flexible and compact functional form that interpolates between various activation functions and mitigates common problems in training neural networks, such as vanishing and exploding gradients. The presented gated representation extends the scope of fixed-shape activation functions to their adaptive counterparts, whose shape can be learnt from the training data. The derivatives of the proposed functional form can also be expressed in terms of Mittag-Leffler functions, making it a suitable candidate for gradient-based backpropagation algorithms. By training multiple neural networks of different complexities on various datasets of different sizes, we demonstrate that adopting a unified gated representation of activation functions offers a promising and affordable alternative to individual built-in implementations of activation functions in conventional machine learning frameworks.
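As a rough illustration of the idea (a minimal sketch, not the paper's implementation): the two-parameter Mittag-Leffler function E_{α,β}(z) = Σ_{k≥0} z^k / Γ(αk + β) reduces to exp(z) at α = β = 1, so a sigmoid-style gate built from it recovers the ordinary logistic function in that limit while remaining tunable through α. The function names, the raw series truncation, and this particular gate parameterization are assumptions for illustration only.

```python
import math

def mittag_leffler(z, alpha=1.0, beta=1.0, n_terms=60):
    # Truncated power series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).
    # Accurate only for moderate |z|; a production routine would use a dedicated
    # numerical algorithm rather than naive truncation.
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(n_terms))

def ml_gate(x, alpha=1.0):
    # Hypothetical sigmoid-style gate: at alpha = 1, E_{1,1}(z) = exp(z),
    # so this reduces to the logistic sigmoid 1 / (1 + exp(-x)).
    # The paper's exact gated parameterization may differ.
    return 1.0 / (1.0 + mittag_leffler(-x, alpha=alpha, beta=alpha))

# Sanity check: at alpha = 1 the gate matches the logistic sigmoid.
for x in (-2.0, 0.0, 2.0):
    assert abs(ml_gate(x) - 1.0 / (1.0 + math.exp(-x))) < 1e-9
print(ml_gate(0.0))  # 0.5
```

Treating α (and β) as trainable parameters is what turns such a fixed-shape gate into the adaptive, learnable activation described above.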