This paper examines the vulnerability of machine learning models, specifically Random Forest, Decision Tree, and K-Nearest Neighbors, to simple single-feature adversarial attacks in the context of Ethereum fraudulent-transaction detection. Through comprehensive experiments, we investigate the impact of several adversarial attack strategies on model performance metrics, including accuracy, precision, recall, and F1-score. Our findings are alarming: these techniques prove highly susceptible to even such simple attacks, while the inconsistency of each attack's effect across algorithms suggests avenues for mitigation. We then evaluate the effectiveness of several defense strategies, including adversarial training and enhanced feature selection, in improving model robustness.
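The single-feature attack idea can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's actual models or data: a hypothetical one-split decision rule stands in for a trained Decision Tree, and the feature names (`value_eth`, `recipient_tx_count`) are invented for the example. Perturbing a single feature moves fraudulent samples across the decision boundary and collapses recall:

```python
# Minimal sketch of a single-feature adversarial attack on a fraud detector.
# The "model" is a hypothetical one-split rule standing in for a trained
# Decision Tree; feature names and thresholds are illustrative assumptions.

def detect_fraud(tx):
    # Hypothetical learned rule: flag high-value transfers to addresses
    # with little prior activity.
    return tx["value_eth"] > 100 and tx["recipient_tx_count"] < 5

frauds = [
    {"value_eth": 500, "recipient_tx_count": 1},
    {"value_eth": 250, "recipient_tx_count": 2},
]

def single_feature_attack(tx):
    # Perturb exactly one feature (the transfer value) so the sample
    # falls below the threshold; all other features stay unchanged.
    evaded = dict(tx)
    evaded["value_eth"] = 99
    return evaded

recall_before = sum(detect_fraud(t) for t in frauds) / len(frauds)
recall_after = sum(detect_fraud(single_feature_attack(t)) for t in frauds) / len(frauds)
print(recall_before, recall_after)  # recall drops from 1.0 to 0.0
```

Ensemble methods such as Random Forest average many such splits, which is one reason the attacks' effects differ across algorithms, and why that inconsistency can be exploited for mitigation.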