Explainable Artificial Intelligence (XAI) aims to uncover the decision-making processes of AI models. However, the data used for such explanations can pose security and privacy risks. Existing literature identifies attacks on machine learning models, including membership inference, model inversion, and model extraction attacks. These attacks target either the model or its training data, depending on the setting and the parties involved. XAI tools can increase the vulnerability of models to extraction attacks, which is a concern when model owners prefer black-box access, keeping model parameters and architecture private. To exploit this risk, we propose AUTOLYCUS, a novel retraining (learning)-based model extraction attack framework against interpretable models under black-box settings. As XAI tools, we exploit Local Interpretable Model-Agnostic Explanations (LIME) and Shapley values (SHAP) to infer decision boundaries and create surrogate models that replicate the functionality of the target model. We choose LIME and SHAP mainly for their realistic yet information-rich explanations, coupled with their extensive adoption, simplicity, and usability. We evaluate AUTOLYCUS on six machine learning datasets, measuring the accuracy of the surrogate model and its similarity to the target model. The results show that AUTOLYCUS is highly effective, requiring significantly fewer queries than state-of-the-art attacks while maintaining comparable accuracy and similarity. We validate its performance and transferability on multiple interpretable ML models, including decision trees, logistic regression, naive Bayes, and k-nearest neighbors. Additionally, we show the resilience of AUTOLYCUS against proposed countermeasures.
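To make the retraining-based extraction setting concrete, the following is a minimal illustrative sketch (not the AUTOLYCUS algorithm itself, which additionally steers its queries with LIME/SHAP explanations): an attacker with black-box label access queries the target model, trains a surrogate on the returned labels, and measures agreement (similarity) between the two models. All names and the synthetic dataset are hypothetical, chosen only to keep the example self-contained.

```python
# Minimal sketch of a learning-based (retraining) model extraction attack.
# Assumption: the attacker only observes target.predict() outputs (black-box)
# and has some unlabeled data drawn from a similar distribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; the owner trains on one half, the attacker
# holds the other half as unlabeled query material.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_owner, X_attacker, y_owner, _ = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target (interpretable) model, visible to the attacker only via predict().
target = LogisticRegression(max_iter=1000).fit(X_owner, y_owner)

# Attack: spend a limited query budget, collect the target's labels,
# and retrain a surrogate on the (query, label) pairs.
queries = X_attacker[:500]                       # query budget of 500
labels = target.predict(queries)                 # black-box responses
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(queries, labels)

# Similarity metric: fraction of held-out points on which the surrogate
# agrees with the target.
agreement = np.mean(
    surrogate.predict(X_attacker) == target.predict(X_attacker)
)
print(f"agreement = {agreement:.2f}")
```

In AUTOLYCUS, the query set is not random as above: LIME/SHAP explanations returned with each prediction are used to infer where decision boundaries lie, so fewer queries are needed to reach a given agreement.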