Counterfactual explanations (CFs) are increasingly integrated into Machine Learning as a Service (MLaaS) systems to improve transparency; however, ML models deployed via APIs are already vulnerable to privacy attacks such as membership inference and model extraction, and the impact of explanations on this threat landscape remains insufficiently understood. In this work, we study how CFs expand the attack surface of MLaaS by strengthening membership inference attacks (MIAs), and how defense mechanisms can mitigate this emerging risk without undermining utility or explainability. First, we systematically analyze how exposing CFs through query-based APIs enables more effective shadow-based MIAs. Second, we propose a defense framework that integrates Differential Privacy (DP) with Active Learning (AL) to jointly reduce memorization and limit the effective exposure of training data. Finally, we conduct an extensive empirical evaluation to characterize the three-way trade-off among privacy leakage, predictive performance, and explanation quality. Our findings highlight the need to carefully balance transparency, utility, and privacy in the responsible deployment of explainable MLaaS systems.