This paper addresses the lack of research on activation functions for neural network models in time series tasks. It highlights the need to identify the essential properties that make these activations effective in specific domains. To this end, the study comprehensively analyzes the properties of activation functions in time series neural networks, such as boundedness, monotonicity, nonlinearity, and periodicity. We propose a new activation function, called LeakySineLU, that maximizes the coverage of these properties. We empirically evaluate LeakySineLU against activation functions commonly used in the literature on 112 benchmark datasets for time series classification, obtaining the best average ranking in all comparative scenarios.
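The abstract does not spell out the functional form of LeakySineLU. As a purely illustrative sketch, the snippet below shows one way an activation could combine the properties listed above (nonlinearity, periodicity via a sine term, and leaky unbounded growth). The formula f(x) = sin²(x) + x for x ≥ 0 and 0.1·(sin²(x) + x) for x < 0, as well as the 0.1 leak factor, are assumptions made here for demonstration, not necessarily the paper's exact definition.

```python
import torch

def leaky_sine_lu(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical LeakySineLU-style activation (illustrative only).

    Combines a periodic sine term with an unbounded linear term,
    scaled by a small "leak" factor on the negative side -- the
    exact formulation used in the paper may differ.
    """
    value = torch.sin(x) ** 2 + x              # periodic + unbounded growth
    return torch.where(x >= 0, value, 0.1 * value)

# Quick check on a few sample inputs
x = torch.linspace(-3.0, 3.0, 7)
print(leaky_sine_lu(x))
```

Under these assumptions, the sine term injects periodicity useful for oscillating signals, while the leaky linear term keeps gradients nonzero for negative inputs, avoiding the dead-neuron problem of ReLU-style activations.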