Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection. Leveraging dynamic sparse training (DST) algorithms within SNNs has demonstrated promising feature selection capability while drastically reducing computational overhead. Despite these advancements, several critical aspects of SNN-based feature selection remain insufficiently explored: the choice of DST algorithm for network training, the choice of metric for ranking features/neurons, and how these methods perform across diverse datasets relative to dense networks. This paper addresses these gaps by presenting a comprehensive, systematic analysis of feature selection with sparse neural networks. Moreover, we introduce a novel metric, designed around the characteristics of SNNs, to quantify feature importance. Our findings show that feature selection with SNNs trained with DST algorithms achieves, on average, more than $50\%$ memory and $55\%$ FLOPs reduction compared to dense networks, while outperforming them in the quality of the selected features. Our code and the supplementary material are available on GitHub (\url{https://github.com/zahraatashgahi/Neuron-Attribution}).
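To make the idea of ranking features via a sparse network concrete, the sketch below scores each input feature by the summed magnitude of its remaining (non-pruned) connections in the first sparse layer. This is an illustrative "neuron strength" style heuristic under assumed array shapes, not the attribution metric proposed in this paper; the function names are hypothetical.

```python
import numpy as np

def feature_scores(weights, mask):
    """Score each input feature by the total magnitude of its surviving
    outgoing connections in a sparse input layer.

    weights: (n_features, n_hidden) weight matrix of the first layer
    mask:    (n_features, n_hidden) binary mask, 1 = connection kept by DST
    Returns a length-n_features array of importance scores.
    """
    return np.abs(weights * mask).sum(axis=1)

def select_top_k(weights, mask, k):
    """Return the indices of the k highest-scoring input features."""
    scores = feature_scores(weights, mask)
    return np.argsort(scores)[::-1][:k]
```

After DST training, features whose input neurons retain many strong connections rank highest; pruned-away features receive scores near zero and are discarded.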