Deep learning is now widely employed across many research domains, and continuing advances in deep learning techniques help solve increasingly intricate problems. Activation functions (AFs) are fundamental components of neural networks: by introducing non-linearity, they enable networks to capture complex patterns and relationships in data and to model the diverse, nuanced nature of real-world inputs, improving predictive accuracy across tasks. In the context of intrusion detection, Mish, a recently proposed AF, was implemented in a CNN-BiGRU model and evaluated on three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. A comparison with the widely used Rectified Linear Unit (ReLU) showed that Mish outperforms ReLU across all evaluated datasets. This study highlights the effectiveness of activation-function choice in improving the performance of intrusion detection systems.
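The two activation functions compared in the study have standard definitions: ReLU returns max(0, x), and Mish is defined as x · tanh(softplus(x)), where softplus(x) = ln(1 + eˣ). A minimal scalar Python sketch of both functions follows; this illustrates only the activations themselves, not the CNN-BiGRU model or the datasets from the study:

```python
import math

def relu(x: float) -> float:
    """ReLU activation: max(0, x)."""
    return max(0.0, x)

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)), softplus(x) = ln(1 + e^x)."""
    return x * math.tanh(math.log1p(math.exp(x)))
```

Unlike ReLU, Mish is smooth everywhere and allows small negative outputs for negative inputs (e.g., mish(-1.0) ≈ -0.303), which is often cited as a reason for its improved gradient flow.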