Data poisoning attacks pose a serious threat to machine learning (ML) models, manipulating training datasets to degrade model performance. Existing defenses are mostly designed to mitigate specific poisoning attacks or are tied to particular ML algorithms. Furthermore, most defenses target deep neural networks or binary classifiers. However, traditional multiclass classifiers also require protection against data poisoning, as these models remain significant in developing multi-modal applications. Therefore, this paper proposes SecureLearn, a two-layer, attack-agnostic defense that protects multiclass models from poisoning attacks. It comprises two components: data sanitization and a new feature-oriented adversarial training method. To ascertain the effectiveness of SecureLearn, we propose a 3D evaluation matrix with three orthogonal dimensions: data poisoning attack, data sanitization and adversarial training. Benchmarking SecureLearn within this 3D matrix, we conduct a detailed analysis at different poisoning levels (10%-20%), focusing on accuracy, recall, F1-score, detection and correction rates, and false discovery rate. Experiments cover four ML algorithms, namely Random Forest (RF), Decision Tree (DT), Gaussian Naive Bayes (GNB) and Multilayer Perceptron (MLP), trained on three public datasets, subjected to three poisoning attacks, and compared with two existing mitigations. Our results show that SecureLearn is effective against all selected attacks. SecureLearn strengthens the resilience and adversarial robustness of both traditional multiclass models and neural networks, confirming its generalization beyond algorithm-specific defenses. It consistently maintains accuracy above 90%, with recall and F1-score above 75%. For neural networks, SecureLearn achieves 97% recall and F1-score against all selected poisoning attacks.